2025-05-05 00:00:11.474023 | Job console starting...
2025-05-05 00:00:11.489249 | Updating repositories
2025-05-05 00:00:11.682092 | Preparing job workspace
2025-05-05 00:00:13.080065 | Running Ansible setup...
2025-05-05 00:00:19.365441 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-05 00:00:20.361533 |
2025-05-05 00:00:20.361650 | PLAY [Base pre]
2025-05-05 00:00:20.439449 |
2025-05-05 00:00:20.439608 | TASK [Setup log path fact]
2025-05-05 00:00:20.482585 | orchestrator | ok
2025-05-05 00:00:20.562470 |
2025-05-05 00:00:20.562618 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-05 00:00:20.616052 | orchestrator | ok
2025-05-05 00:00:20.668358 |
2025-05-05 00:00:20.672534 | TASK [emit-job-header : Print job information]
2025-05-05 00:00:20.789285 | # Job Information
2025-05-05 00:00:20.789454 | Ansible Version: 2.15.3
2025-05-05 00:00:20.789489 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-05 00:00:20.789519 | Pipeline: periodic-midnight
2025-05-05 00:00:20.789540 | Executor: 7d211f194f6a
2025-05-05 00:00:20.789559 | Triggered by: https://github.com/osism/testbed
2025-05-05 00:00:20.789577 | Event ID: 25318ecb463c4afe9dcd0f639dac42ae
2025-05-05 00:00:20.802320 |
2025-05-05 00:00:20.802433 | LOOP [emit-job-header : Print node information]
2025-05-05 00:00:21.203604 | orchestrator | ok:
2025-05-05 00:00:21.203789 | orchestrator | # Node Information
2025-05-05 00:00:21.203819 | orchestrator | Inventory Hostname: orchestrator
2025-05-05 00:00:21.203839 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-05 00:00:21.203856 | orchestrator | Username: zuul-testbed04
2025-05-05 00:00:21.203872 | orchestrator | Distro: Debian 12.10
2025-05-05 00:00:21.203891 | orchestrator | Provider: static-testbed
2025-05-05 00:00:21.203907 | orchestrator | Label: testbed-orchestrator
2025-05-05 00:00:21.203924 | orchestrator | Product Name: OpenStack Nova
2025-05-05 00:00:21.203940 | orchestrator | Interface IP: 81.163.193.140
2025-05-05 00:00:21.223865 |
2025-05-05 00:00:21.223990 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-05 00:00:22.175297 | orchestrator -> localhost | changed
2025-05-05 00:00:22.182562 |
2025-05-05 00:00:22.182647 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-05 00:00:24.662742 | orchestrator -> localhost | changed
2025-05-05 00:00:24.679260 |
2025-05-05 00:00:24.679366 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-05 00:00:25.411973 | orchestrator -> localhost | ok
2025-05-05 00:00:25.419076 |
2025-05-05 00:00:25.419174 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-05 00:00:25.487640 | orchestrator | ok
2025-05-05 00:00:25.526196 | orchestrator | included: /var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-05 00:00:25.543815 |
2025-05-05 00:00:25.543913 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-05 00:00:27.130915 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-05 00:00:27.131091 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/work/a6ffc6a5efc64cf28e477909b96a1c4a_id_rsa
2025-05-05 00:00:27.131125 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/work/a6ffc6a5efc64cf28e477909b96a1c4a_id_rsa.pub
2025-05-05 00:00:27.131149 | orchestrator -> localhost | The key fingerprint is:
2025-05-05 00:00:27.131175 | orchestrator -> localhost | SHA256:gesATNNSPZjZI9PBglOCXjrdqlAOaguyL3BAgqZ2cnU zuul-build-sshkey
2025-05-05 00:00:27.131196 | orchestrator -> localhost | The key's randomart image is:
2025-05-05 00:00:27.131217 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-05 00:00:27.131236 | orchestrator -> localhost | |..+=oO..         |
2025-05-05 00:00:27.131255 | orchestrator -> localhost | |+=++O.BE         |
2025-05-05 00:00:27.131284 | orchestrator -> localhost | |* *oo+oo.        |
2025-05-05 00:00:27.131304 | orchestrator -> localhost | |+*.= . . .       |
2025-05-05 00:00:27.131324 | orchestrator -> localhost | |o== o . S        |
2025-05-05 00:00:27.131343 | orchestrator -> localhost | |*.o.  o          |
2025-05-05 00:00:27.131368 | orchestrator -> localhost | |*oo .            |
2025-05-05 00:00:27.131389 | orchestrator -> localhost | |oo               |
2025-05-05 00:00:27.131408 | orchestrator -> localhost | | o.              |
2025-05-05 00:00:27.131428 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-05 00:00:27.131478 | orchestrator -> localhost | ok: Runtime: 0:00:00.511978
2025-05-05 00:00:27.144940 |
2025-05-05 00:00:27.145053 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-05 00:00:27.201726 | orchestrator | ok
2025-05-05 00:00:27.225757 | orchestrator | included: /var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-05 00:00:27.246958 |
2025-05-05 00:00:27.247066 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-05 00:00:27.291277 | orchestrator | skipping: Conditional result was False
2025-05-05 00:00:27.303575 |
2025-05-05 00:00:27.303679 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-05 00:00:27.941614 | orchestrator | changed
2025-05-05 00:00:27.969472 |
2025-05-05 00:00:27.969580 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-05 00:00:28.283975 | orchestrator | ok
2025-05-05 00:00:28.300216 |
2025-05-05 00:00:28.300322 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-05 00:00:28.963659 | orchestrator | ok
2025-05-05 00:00:28.995339 |
2025-05-05 00:00:28.995479 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-05 00:00:29.505840 | orchestrator | ok
2025-05-05 00:00:29.518177 |
2025-05-05 00:00:29.518271 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-05 00:00:29.570759 | orchestrator | skipping: Conditional result was False
2025-05-05 00:00:29.578035 |
2025-05-05 00:00:29.578125 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-05 00:00:30.018075 | orchestrator -> localhost | changed
2025-05-05 00:00:30.037744 |
2025-05-05 00:00:30.037861 | TASK [add-build-sshkey : Add back temp key]
2025-05-05 00:00:30.865984 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/work/a6ffc6a5efc64cf28e477909b96a1c4a_id_rsa (zuul-build-sshkey)
2025-05-05 00:00:30.866162 | orchestrator -> localhost | ok: Runtime: 0:00:00.009186
2025-05-05 00:00:30.873232 |
2025-05-05 00:00:30.873324 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-05 00:00:31.299676 | orchestrator | ok
2025-05-05 00:00:31.307898 |
2025-05-05 00:00:31.307983 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-05 00:00:31.367615 | orchestrator | skipping: Conditional result was False
2025-05-05 00:00:31.380537 |
2025-05-05 00:00:31.380634 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-05 00:00:31.867379 | orchestrator | ok
2025-05-05 00:00:31.899911 |
2025-05-05 00:00:31.900018 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-05 00:00:31.970169 | orchestrator | ok
2025-05-05 00:00:31.976337 |
2025-05-05 00:00:31.976438 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-05 00:00:32.378681 | orchestrator -> localhost | ok
2025-05-05 00:00:32.386675 |
2025-05-05 00:00:32.386763 | TASK [validate-host : Collect information about the host]
2025-05-05 00:00:33.645791 | orchestrator | ok
2025-05-05 00:00:33.683089 |
2025-05-05 00:00:33.683202 | TASK [validate-host : Sanitize hostname]
2025-05-05 00:00:33.771758 | orchestrator | ok
2025-05-05 00:00:33.779719 |
2025-05-05 00:00:33.779861 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-05 00:00:34.712880 | orchestrator -> localhost | changed
2025-05-05 00:00:34.720584 |
2025-05-05 00:00:34.720685 | TASK [validate-host : Collect information about zuul worker]
2025-05-05 00:00:35.465866 | orchestrator | ok
2025-05-05 00:00:35.475906 |
2025-05-05 00:00:35.476008 | TASK [validate-host : Write out all zuul information for each host]
2025-05-05 00:00:36.475393 | orchestrator -> localhost | changed
2025-05-05 00:00:36.489590 |
2025-05-05 00:00:36.489697 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-05 00:00:36.770602 | orchestrator | ok
2025-05-05 00:00:36.793689 |
2025-05-05 00:00:36.793852 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-05 00:00:54.881939 | orchestrator | changed:
2025-05-05 00:00:54.882184 | orchestrator | .d..t...... src/
2025-05-05 00:00:54.882220 | orchestrator | .d..t...... src/github.com/
2025-05-05 00:00:54.882244 | orchestrator | .d..t...... src/github.com/osism/
2025-05-05 00:00:54.882266 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-05 00:00:54.882285 | orchestrator | RedHat.yml
2025-05-05 00:00:54.896449 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-05 00:00:54.896472 | orchestrator | RedHat.yml
2025-05-05 00:00:54.896528 | orchestrator | = 2.2.0"...
2025-05-05 00:01:07.148664 | orchestrator | 00:01:07.148 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-05 00:01:07.227749 | orchestrator | 00:01:07.227 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-05 00:01:08.615701 | orchestrator | 00:01:08.615 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-05 00:01:09.678581 | orchestrator | 00:01:09.678 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-05 00:01:10.915415 | orchestrator | 00:01:10.915 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-05 00:01:11.966487 | orchestrator | 00:01:11.966 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-05 00:01:12.936267 | orchestrator | 00:01:12.935 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-05 00:01:14.132096 | orchestrator | 00:01:14.131 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-05 00:01:14.132159 | orchestrator | 00:01:14.132 STDOUT terraform: Providers are signed by their developers.
2025-05-05 00:01:14.132268 | orchestrator | 00:01:14.132 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-05 00:01:14.132413 | orchestrator | 00:01:14.132 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-05 00:01:14.132563 | orchestrator | 00:01:14.132 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-05 00:01:14.132736 | orchestrator | 00:01:14.132 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-05 00:01:14.132893 | orchestrator | 00:01:14.132 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-05 00:01:14.133044 | orchestrator | 00:01:14.132 STDOUT terraform: you run "tofu init" in the future.
2025-05-05 00:01:14.133053 | orchestrator | 00:01:14.132 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-05 00:01:14.133206 | orchestrator | 00:01:14.132 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-05 00:01:14.133286 | orchestrator | 00:01:14.133 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-05 00:01:14.133295 | orchestrator | 00:01:14.133 STDOUT terraform: should now work.
2025-05-05 00:01:14.133453 | orchestrator | 00:01:14.133 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-05 00:01:14.133602 | orchestrator | 00:01:14.133 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-05 00:01:14.133863 | orchestrator | 00:01:14.133 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-05 00:01:14.333233 | orchestrator | 00:01:14.333 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-05 00:01:14.540639 | orchestrator | 00:01:14.540 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-05 00:01:14.540787 | orchestrator | 00:01:14.540 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-05 00:01:14.540855 | orchestrator | 00:01:14.540 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-05 00:01:14.836245 | orchestrator | 00:01:14.540 STDOUT terraform: for this configuration.
2025-05-05 00:01:14.836397 | orchestrator | 00:01:14.836 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-05 00:01:14.954531 | orchestrator | 00:01:14.954 STDOUT terraform: ci.auto.tfvars
2025-05-05 00:01:15.672885 | orchestrator | 00:01:15.672 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-05 00:01:17.125739 | orchestrator | 00:01:17.125 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-05 00:01:17.659804 | orchestrator | 00:01:17.659 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-05 00:01:17.883186 | orchestrator | 00:01:17.882 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-05 00:01:17.883264 | orchestrator | 00:01:17.883 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-05 00:01:17.883274 | orchestrator | 00:01:17.883 STDOUT terraform:   + create
2025-05-05 00:01:17.883366 | orchestrator | 00:01:17.883 STDOUT terraform:  <= read (data resources)
2025-05-05 00:01:17.883437 | orchestrator | 00:01:17.883 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-05 00:01:17.883760 | orchestrator | 00:01:17.883 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-05 00:01:17.883838 | orchestrator | 00:01:17.883 STDOUT terraform:   # (config refers to values not yet known)
2025-05-05 00:01:17.883941 | orchestrator | 00:01:17.883 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-05 00:01:17.884062 | orchestrator | 00:01:17.883 STDOUT terraform:   + checksum = (known after apply)
2025-05-05 00:01:17.884115 | orchestrator | 00:01:17.884 STDOUT terraform:   + created_at = (known after apply)
2025-05-05 00:01:17.884180 | orchestrator | 00:01:17.884 STDOUT terraform:   + file = (known after apply)
2025-05-05 00:01:17.884243 | orchestrator | 00:01:17.884 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.884305 | orchestrator | 00:01:17.884 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.884378 | orchestrator | 00:01:17.884 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-05 00:01:17.884430 | orchestrator | 00:01:17.884 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-05 00:01:17.884486 | orchestrator | 00:01:17.884 STDOUT terraform:   + most_recent = true
2025-05-05 00:01:17.884534 | orchestrator | 00:01:17.884 STDOUT terraform:   + name = (known after apply)
2025-05-05 00:01:17.884595 | orchestrator | 00:01:17.884 STDOUT terraform:   + protected = (known after apply)
2025-05-05 00:01:17.884662 | orchestrator | 00:01:17.884 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.884713 | orchestrator | 00:01:17.884 STDOUT terraform:   + schema = (known after apply)
2025-05-05 00:01:17.884774 | orchestrator | 00:01:17.884 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-05 00:01:17.884861 | orchestrator | 00:01:17.884 STDOUT terraform:   + tags = (known after apply)
2025-05-05 00:01:17.884924 | orchestrator | 00:01:17.884 STDOUT terraform:   + updated_at = (known after apply)
2025-05-05 00:01:17.884949 | orchestrator | 00:01:17.884 STDOUT terraform:   }
2025-05-05 00:01:17.885063 | orchestrator | 00:01:17.884 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-05 00:01:17.885129 | orchestrator | 00:01:17.885 STDOUT terraform:   # (config refers to values not yet known)
2025-05-05 00:01:17.885197 | orchestrator | 00:01:17.885 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-05 00:01:17.885259 | orchestrator | 00:01:17.885 STDOUT terraform:   + checksum = (known after apply)
2025-05-05 00:01:17.885318 | orchestrator | 00:01:17.885 STDOUT terraform:   + created_at = (known after apply)
2025-05-05 00:01:17.885385 | orchestrator | 00:01:17.885 STDOUT terraform:   + file = (known after apply)
2025-05-05 00:01:17.885439 | orchestrator | 00:01:17.885 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.885498 | orchestrator | 00:01:17.885 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.885556 | orchestrator | 00:01:17.885 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-05 00:01:17.885621 | orchestrator | 00:01:17.885 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-05 00:01:17.885656 | orchestrator | 00:01:17.885 STDOUT terraform:   + most_recent = true
2025-05-05 00:01:17.885723 | orchestrator | 00:01:17.885 STDOUT terraform:   + name = (known after apply)
2025-05-05 00:01:17.885773 | orchestrator | 00:01:17.885 STDOUT terraform:   + protected = (known after apply)
2025-05-05 00:01:17.885858 | orchestrator | 00:01:17.885 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.885906 | orchestrator | 00:01:17.885 STDOUT terraform:   + schema = (known after apply)
2025-05-05 00:01:17.885967 | orchestrator | 00:01:17.885 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-05 00:01:17.886064 | orchestrator | 00:01:17.885 STDOUT terraform:   + tags = (known after apply)
2025-05-05 00:01:17.886126 | orchestrator | 00:01:17.886 STDOUT terraform:   + updated_at = (known after apply)
2025-05-05 00:01:17.886157 | orchestrator | 00:01:17.886 STDOUT terraform:   }
2025-05-05 00:01:17.886220 | orchestrator | 00:01:17.886 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-05 00:01:17.886282 | orchestrator | 00:01:17.886 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-05 00:01:17.886356 | orchestrator | 00:01:17.886 STDOUT terraform:   + content = (known after apply)
2025-05-05 00:01:17.886431 | orchestrator | 00:01:17.886 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-05 00:01:17.886509 | orchestrator | 00:01:17.886 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-05 00:01:17.886579 | orchestrator | 00:01:17.886 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-05 00:01:17.886649 | orchestrator | 00:01:17.886 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-05 00:01:17.886718 | orchestrator | 00:01:17.886 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-05 00:01:17.886789 | orchestrator | 00:01:17.886 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-05 00:01:17.886887 | orchestrator | 00:01:17.886 STDOUT terraform:   + directory_permission = "0777"
2025-05-05 00:01:17.886939 | orchestrator | 00:01:17.886 STDOUT terraform:   + file_permission = "0644"
2025-05-05 00:01:17.887013 | orchestrator | 00:01:17.886 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-05 00:01:17.887089 | orchestrator | 00:01:17.887 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.887116 | orchestrator | 00:01:17.887 STDOUT terraform:   }
2025-05-05 00:01:17.887176 | orchestrator | 00:01:17.887 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-05 00:01:17.887235 | orchestrator | 00:01:17.887 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-05 00:01:17.887303 | orchestrator | 00:01:17.887 STDOUT terraform:   + content = (known after apply)
2025-05-05 00:01:17.887375 | orchestrator | 00:01:17.887 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-05 00:01:17.887441 | orchestrator | 00:01:17.887 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-05 00:01:17.887527 | orchestrator | 00:01:17.887 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-05 00:01:17.887594 | orchestrator | 00:01:17.887 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-05 00:01:17.887656 | orchestrator | 00:01:17.887 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-05 00:01:17.887719 | orchestrator | 00:01:17.887 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-05 00:01:17.887762 | orchestrator | 00:01:17.887 STDOUT terraform:   + directory_permission = "0777"
2025-05-05 00:01:17.887813 | orchestrator | 00:01:17.887 STDOUT terraform:   + file_permission = "0644"
2025-05-05 00:01:17.887880 | orchestrator | 00:01:17.887 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-05 00:01:17.887961 | orchestrator | 00:01:17.887 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.888046 | orchestrator | 00:01:17.887 STDOUT terraform:   }
2025-05-05 00:01:17.888057 | orchestrator | 00:01:17.888 STDOUT terraform:   # local_file.inventory will be created
2025-05-05 00:01:17.888091 | orchestrator | 00:01:17.888 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-05 00:01:17.888152 | orchestrator | 00:01:17.888 STDOUT terraform:   + content = (known after apply)
2025-05-05 00:01:17.888223 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-05 00:01:17.888275 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-05 00:01:17.888336 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-05 00:01:17.888398 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-05 00:01:17.888458 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-05 00:01:17.888524 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-05 00:01:17.888566 | orchestrator | 00:01:17.888 STDOUT terraform:   + directory_permission = "0777"
2025-05-05 00:01:17.888615 | orchestrator | 00:01:17.888 STDOUT terraform:   + file_permission = "0644"
2025-05-05 00:01:17.888665 | orchestrator | 00:01:17.888 STDOUT terraform:   + filename = "inventory.ci"
2025-05-05 00:01:17.888726 | orchestrator | 00:01:17.888 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.888750 | orchestrator | 00:01:17.888 STDOUT terraform:   }
2025-05-05 00:01:17.888800 | orchestrator | 00:01:17.888 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-05 00:01:17.888894 | orchestrator | 00:01:17.888 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-05 00:01:17.888949 | orchestrator | 00:01:17.888 STDOUT terraform:   + content = (sensitive value)
2025-05-05 00:01:17.889009 | orchestrator | 00:01:17.888 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-05 00:01:17.889071 | orchestrator | 00:01:17.889 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-05 00:01:17.889131 | orchestrator | 00:01:17.889 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-05 00:01:17.889196 | orchestrator | 00:01:17.889 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-05 00:01:17.889252 | orchestrator | 00:01:17.889 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-05 00:01:17.889322 | orchestrator | 00:01:17.889 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-05 00:01:17.889354 | orchestrator | 00:01:17.889 STDOUT terraform:   + directory_permission = "0700"
2025-05-05 00:01:17.889394 | orchestrator | 00:01:17.889 STDOUT terraform:   + file_permission = "0600"
2025-05-05 00:01:17.889442 | orchestrator | 00:01:17.889 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-05 00:01:17.889501 | orchestrator | 00:01:17.889 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.889524 | orchestrator | 00:01:17.889 STDOUT terraform:   }
2025-05-05 00:01:17.889572 | orchestrator | 00:01:17.889 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-05 00:01:17.889620 | orchestrator | 00:01:17.889 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-05 00:01:17.889655 | orchestrator | 00:01:17.889 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.889677 | orchestrator | 00:01:17.889 STDOUT terraform:   }
2025-05-05 00:01:17.889756 | orchestrator | 00:01:17.889 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-05 00:01:17.889850 | orchestrator | 00:01:17.889 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-05 00:01:17.889899 | orchestrator | 00:01:17.889 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.889933 | orchestrator | 00:01:17.889 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.889983 | orchestrator | 00:01:17.889 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.890054 | orchestrator | 00:01:17.889 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.890103 | orchestrator | 00:01:17.890 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.890165 | orchestrator | 00:01:17.890 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-05 00:01:17.890214 | orchestrator | 00:01:17.890 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.890251 | orchestrator | 00:01:17.890 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.890286 | orchestrator | 00:01:17.890 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.890308 | orchestrator | 00:01:17.890 STDOUT terraform:   }
2025-05-05 00:01:17.890384 | orchestrator | 00:01:17.890 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-05 00:01:17.890459 | orchestrator | 00:01:17.890 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-05 00:01:17.890510 | orchestrator | 00:01:17.890 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.890543 | orchestrator | 00:01:17.890 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.890593 | orchestrator | 00:01:17.890 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.890642 | orchestrator | 00:01:17.890 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.890693 | orchestrator | 00:01:17.890 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.890756 | orchestrator | 00:01:17.890 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-05 00:01:17.890806 | orchestrator | 00:01:17.890 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.890869 | orchestrator | 00:01:17.890 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.890913 | orchestrator | 00:01:17.890 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.890936 | orchestrator | 00:01:17.890 STDOUT terraform:   }
2025-05-05 00:01:17.891013 | orchestrator | 00:01:17.890 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-05 00:01:17.891085 | orchestrator | 00:01:17.891 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-05 00:01:17.891138 | orchestrator | 00:01:17.891 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.891171 | orchestrator | 00:01:17.891 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.891227 | orchestrator | 00:01:17.891 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.891291 | orchestrator | 00:01:17.891 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.891342 | orchestrator | 00:01:17.891 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.891404 | orchestrator | 00:01:17.891 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-05 00:01:17.891471 | orchestrator | 00:01:17.891 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.891506 | orchestrator | 00:01:17.891 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.891541 | orchestrator | 00:01:17.891 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.891563 | orchestrator | 00:01:17.891 STDOUT terraform:   }
2025-05-05 00:01:17.891645 | orchestrator | 00:01:17.891 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-05 00:01:17.891728 | orchestrator | 00:01:17.891 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-05 00:01:17.891779 | orchestrator | 00:01:17.891 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.891826 | orchestrator | 00:01:17.891 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.891880 | orchestrator | 00:01:17.891 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.891929 | orchestrator | 00:01:17.891 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.891979 | orchestrator | 00:01:17.891 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.892041 | orchestrator | 00:01:17.891 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-05 00:01:17.892091 | orchestrator | 00:01:17.892 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.892124 | orchestrator | 00:01:17.892 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.892157 | orchestrator | 00:01:17.892 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.892179 | orchestrator | 00:01:17.892 STDOUT terraform:   }
2025-05-05 00:01:17.892254 | orchestrator | 00:01:17.892 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-05 00:01:17.892330 | orchestrator | 00:01:17.892 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-05 00:01:17.892384 | orchestrator | 00:01:17.892 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.892418 | orchestrator | 00:01:17.892 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.892468 | orchestrator | 00:01:17.892 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.892517 | orchestrator | 00:01:17.892 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.892574 | orchestrator | 00:01:17.892 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.892658 | orchestrator | 00:01:17.892 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-05 00:01:17.892733 | orchestrator | 00:01:17.892 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.892792 | orchestrator | 00:01:17.892 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.892854 | orchestrator | 00:01:17.892 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.892888 | orchestrator | 00:01:17.892 STDOUT terraform:   }
2025-05-05 00:01:17.892968 | orchestrator | 00:01:17.892 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-05 00:01:17.893040 | orchestrator | 00:01:17.892 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-05 00:01:17.893090 | orchestrator | 00:01:17.893 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.893145 | orchestrator | 00:01:17.893 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.893231 | orchestrator | 00:01:17.893 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.893319 | orchestrator | 00:01:17.893 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.893370 | orchestrator | 00:01:17.893 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.893433 | orchestrator | 00:01:17.893 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-05 00:01:17.893483 | orchestrator | 00:01:17.893 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.893518 | orchestrator | 00:01:17.893 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.893552 | orchestrator | 00:01:17.893 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.893573 | orchestrator | 00:01:17.893 STDOUT terraform:   }
2025-05-05 00:01:17.893647 | orchestrator | 00:01:17.893 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-05 00:01:17.893721 | orchestrator | 00:01:17.893 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-05 00:01:17.893770 | orchestrator | 00:01:17.893 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.893803 | orchestrator | 00:01:17.893 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.893916 | orchestrator | 00:01:17.893 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.893968 | orchestrator | 00:01:17.893 STDOUT terraform:   + image_id = (known after apply)
2025-05-05 00:01:17.894034 | orchestrator | 00:01:17.893 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.894122 | orchestrator | 00:01:17.894 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-05 00:01:17.894173 | orchestrator | 00:01:17.894 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.894209 | orchestrator | 00:01:17.894 STDOUT terraform:   + size = 80
2025-05-05 00:01:17.894244 | orchestrator | 00:01:17.894 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.894265 | orchestrator | 00:01:17.894 STDOUT terraform:   }
2025-05-05 00:01:17.894336 | orchestrator | 00:01:17.894 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-05 00:01:17.894405 | orchestrator | 00:01:17.894 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-05 00:01:17.894467 | orchestrator | 00:01:17.894 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.894498 | orchestrator | 00:01:17.894 STDOUT terraform:   + availability_zone = "nova"
2025-05-05 00:01:17.894548 | orchestrator | 00:01:17.894 STDOUT terraform:   + id = (known after apply)
2025-05-05 00:01:17.894588 | orchestrator | 00:01:17.894 STDOUT terraform:   + metadata = (known after apply)
2025-05-05 00:01:17.894641 | orchestrator | 00:01:17.894 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-05-05 00:01:17.894705 | orchestrator | 00:01:17.894 STDOUT terraform:   + region = (known after apply)
2025-05-05 00:01:17.894739 | orchestrator | 00:01:17.894 STDOUT terraform:   + size = 20
2025-05-05 00:01:17.894769 | orchestrator | 00:01:17.894 STDOUT terraform:   + volume_type = "ssd"
2025-05-05 00:01:17.894797 | orchestrator | 00:01:17.894 STDOUT terraform:   }
2025-05-05 00:01:17.894895 | orchestrator | 00:01:17.894 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-05 00:01:17.894962 | orchestrator | 00:01:17.894 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-05 00:01:17.895007 | orchestrator | 00:01:17.894 STDOUT terraform:   + attachment = (known after apply)
2025-05-05 00:01:17.895054 | orchestrator | 00:01:17.895 STDOUT terraform:
+ availability_zone = "nova" 2025-05-05 00:01:17.895103 | orchestrator | 00:01:17.895 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.895149 | orchestrator | 00:01:17.895 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.895205 | orchestrator | 00:01:17.895 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-05-05 00:01:17.895259 | orchestrator | 00:01:17.895 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.895292 | orchestrator | 00:01:17.895 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.895323 | orchestrator | 00:01:17.895 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.895343 | orchestrator | 00:01:17.895 STDOUT terraform:  } 2025-05-05 00:01:17.895441 | orchestrator | 00:01:17.895 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-05 00:01:17.895507 | orchestrator | 00:01:17.895 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.895552 | orchestrator | 00:01:17.895 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.895582 | orchestrator | 00:01:17.895 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.895628 | orchestrator | 00:01:17.895 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.895675 | orchestrator | 00:01:17.895 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.895725 | orchestrator | 00:01:17.895 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-05-05 00:01:17.895768 | orchestrator | 00:01:17.895 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.895799 | orchestrator | 00:01:17.895 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.895845 | orchestrator | 00:01:17.895 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.895864 | orchestrator | 00:01:17.895 STDOUT terraform:  } 2025-05-05 00:01:17.895927 | orchestrator | 00:01:17.895 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-05 00:01:17.895988 | orchestrator | 00:01:17.895 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.896030 | orchestrator | 00:01:17.895 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.896060 | orchestrator | 00:01:17.896 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.896105 | orchestrator | 00:01:17.896 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.896148 | orchestrator | 00:01:17.896 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.896202 | orchestrator | 00:01:17.896 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-05 00:01:17.896244 | orchestrator | 00:01:17.896 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.896274 | orchestrator | 00:01:17.896 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.896304 | orchestrator | 00:01:17.896 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.896323 | orchestrator | 00:01:17.896 STDOUT terraform:  } 2025-05-05 00:01:17.896385 | orchestrator | 00:01:17.896 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-05 00:01:17.896447 | orchestrator | 00:01:17.896 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.896489 | orchestrator | 00:01:17.896 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.896519 | orchestrator | 00:01:17.896 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.896580 | orchestrator | 00:01:17.896 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.896627 | orchestrator | 00:01:17.896 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.896680 | orchestrator | 00:01:17.896 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-05 00:01:17.896724 | orchestrator | 00:01:17.896 STDOUT 
terraform:  + region = (known after apply) 2025-05-05 00:01:17.896753 | orchestrator | 00:01:17.896 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.896782 | orchestrator | 00:01:17.896 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.896801 | orchestrator | 00:01:17.896 STDOUT terraform:  } 2025-05-05 00:01:17.896879 | orchestrator | 00:01:17.896 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-05 00:01:17.896941 | orchestrator | 00:01:17.896 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.896983 | orchestrator | 00:01:17.896 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.897013 | orchestrator | 00:01:17.896 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.897057 | orchestrator | 00:01:17.897 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.897105 | orchestrator | 00:01:17.897 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.897155 | orchestrator | 00:01:17.897 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-05 00:01:17.897199 | orchestrator | 00:01:17.897 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.897228 | orchestrator | 00:01:17.897 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.897257 | orchestrator | 00:01:17.897 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.897276 | orchestrator | 00:01:17.897 STDOUT terraform:  } 2025-05-05 00:01:17.897338 | orchestrator | 00:01:17.897 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-05 00:01:17.897399 | orchestrator | 00:01:17.897 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.897441 | orchestrator | 00:01:17.897 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.897471 | orchestrator | 00:01:17.897 STDOUT terraform:  + availability_zone = "nova" 
2025-05-05 00:01:17.897515 | orchestrator | 00:01:17.897 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.897560 | orchestrator | 00:01:17.897 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.897611 | orchestrator | 00:01:17.897 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-05-05 00:01:17.897654 | orchestrator | 00:01:17.897 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.897683 | orchestrator | 00:01:17.897 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.897718 | orchestrator | 00:01:17.897 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.897726 | orchestrator | 00:01:17.897 STDOUT terraform:  } 2025-05-05 00:01:17.897802 | orchestrator | 00:01:17.897 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-05 00:01:17.897903 | orchestrator | 00:01:17.897 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.897949 | orchestrator | 00:01:17.897 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.897978 | orchestrator | 00:01:17.897 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.898051 | orchestrator | 00:01:17.897 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.898085 | orchestrator | 00:01:17.898 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.898139 | orchestrator | 00:01:17.898 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-05-05 00:01:17.898182 | orchestrator | 00:01:17.898 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.898212 | orchestrator | 00:01:17.898 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.898241 | orchestrator | 00:01:17.898 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.898259 | orchestrator | 00:01:17.898 STDOUT terraform:  } 2025-05-05 00:01:17.898316 | orchestrator | 00:01:17.898 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-05 00:01:17.898371 | orchestrator | 00:01:17.898 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.898411 | orchestrator | 00:01:17.898 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.898437 | orchestrator | 00:01:17.898 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.898476 | orchestrator | 00:01:17.898 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.898548 | orchestrator | 00:01:17.898 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.898597 | orchestrator | 00:01:17.898 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-05-05 00:01:17.898638 | orchestrator | 00:01:17.898 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.898664 | orchestrator | 00:01:17.898 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.898691 | orchestrator | 00:01:17.898 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.898709 | orchestrator | 00:01:17.898 STDOUT terraform:  } 2025-05-05 00:01:17.898768 | orchestrator | 00:01:17.898 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-05-05 00:01:17.898860 | orchestrator | 00:01:17.898 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.898870 | orchestrator | 00:01:17.898 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.898896 | orchestrator | 00:01:17.898 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.898938 | orchestrator | 00:01:17.898 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.898976 | orchestrator | 00:01:17.898 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.899025 | orchestrator | 00:01:17.898 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-05-05 00:01:17.899065 | orchestrator | 00:01:17.899 STDOUT 
terraform:  + region = (known after apply) 2025-05-05 00:01:17.899092 | orchestrator | 00:01:17.899 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.899119 | orchestrator | 00:01:17.899 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.899137 | orchestrator | 00:01:17.899 STDOUT terraform:  } 2025-05-05 00:01:17.899195 | orchestrator | 00:01:17.899 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-05-05 00:01:17.899251 | orchestrator | 00:01:17.899 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.899289 | orchestrator | 00:01:17.899 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.899315 | orchestrator | 00:01:17.899 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.899355 | orchestrator | 00:01:17.899 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.899398 | orchestrator | 00:01:17.899 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.899443 | orchestrator | 00:01:17.899 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-05-05 00:01:17.899481 | orchestrator | 00:01:17.899 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.899506 | orchestrator | 00:01:17.899 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.899533 | orchestrator | 00:01:17.899 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.899549 | orchestrator | 00:01:17.899 STDOUT terraform:  } 2025-05-05 00:01:17.899608 | orchestrator | 00:01:17.899 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-05-05 00:01:17.899662 | orchestrator | 00:01:17.899 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.899700 | orchestrator | 00:01:17.899 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.899727 | orchestrator | 00:01:17.899 STDOUT terraform:  + availability_zone = "nova" 
2025-05-05 00:01:17.899767 | orchestrator | 00:01:17.899 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.899807 | orchestrator | 00:01:17.899 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.899876 | orchestrator | 00:01:17.899 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-05-05 00:01:17.899918 | orchestrator | 00:01:17.899 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.899943 | orchestrator | 00:01:17.899 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.899972 | orchestrator | 00:01:17.899 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.899988 | orchestrator | 00:01:17.899 STDOUT terraform:  } 2025-05-05 00:01:17.900046 | orchestrator | 00:01:17.899 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-05-05 00:01:17.900099 | orchestrator | 00:01:17.900 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.900139 | orchestrator | 00:01:17.900 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.900169 | orchestrator | 00:01:17.900 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.900209 | orchestrator | 00:01:17.900 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.900249 | orchestrator | 00:01:17.900 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.900297 | orchestrator | 00:01:17.900 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-05-05 00:01:17.900335 | orchestrator | 00:01:17.900 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.900362 | orchestrator | 00:01:17.900 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.900392 | orchestrator | 00:01:17.900 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.900409 | orchestrator | 00:01:17.900 STDOUT terraform:  } 2025-05-05 00:01:17.900465 | orchestrator | 00:01:17.900 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-05-05 00:01:17.900520 | orchestrator | 00:01:17.900 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.900559 | orchestrator | 00:01:17.900 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.900585 | orchestrator | 00:01:17.900 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.900624 | orchestrator | 00:01:17.900 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.900663 | orchestrator | 00:01:17.900 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.900714 | orchestrator | 00:01:17.900 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-05-05 00:01:17.900752 | orchestrator | 00:01:17.900 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.900779 | orchestrator | 00:01:17.900 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.900805 | orchestrator | 00:01:17.900 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.900851 | orchestrator | 00:01:17.900 STDOUT terraform:  } 2025-05-05 00:01:17.900908 | orchestrator | 00:01:17.900 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-05-05 00:01:17.900965 | orchestrator | 00:01:17.900 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.901006 | orchestrator | 00:01:17.900 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.901038 | orchestrator | 00:01:17.901 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.901078 | orchestrator | 00:01:17.901 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.901114 | orchestrator | 00:01:17.901 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.901158 | orchestrator | 00:01:17.901 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-05-05 00:01:17.901195 | orchestrator | 00:01:17.901 STDOUT 
terraform:  + region = (known after apply) 2025-05-05 00:01:17.901222 | orchestrator | 00:01:17.901 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.901247 | orchestrator | 00:01:17.901 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.901265 | orchestrator | 00:01:17.901 STDOUT terraform:  } 2025-05-05 00:01:17.901321 | orchestrator | 00:01:17.901 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-05-05 00:01:17.901373 | orchestrator | 00:01:17.901 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.901410 | orchestrator | 00:01:17.901 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.901434 | orchestrator | 00:01:17.901 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.901472 | orchestrator | 00:01:17.901 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.901511 | orchestrator | 00:01:17.901 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.901554 | orchestrator | 00:01:17.901 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-05-05 00:01:17.901591 | orchestrator | 00:01:17.901 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.901616 | orchestrator | 00:01:17.901 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.901641 | orchestrator | 00:01:17.901 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.901658 | orchestrator | 00:01:17.901 STDOUT terraform:  } 2025-05-05 00:01:17.901753 | orchestrator | 00:01:17.901 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-05-05 00:01:17.901863 | orchestrator | 00:01:17.901 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.901888 | orchestrator | 00:01:17.901 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.901895 | orchestrator | 00:01:17.901 STDOUT terraform:  + availability_zone = "nova" 
2025-05-05 00:01:17.901931 | orchestrator | 00:01:17.901 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.901965 | orchestrator | 00:01:17.901 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.902027 | orchestrator | 00:01:17.901 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-05-05 00:01:17.902064 | orchestrator | 00:01:17.902 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.902090 | orchestrator | 00:01:17.902 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.902116 | orchestrator | 00:01:17.902 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.902135 | orchestrator | 00:01:17.902 STDOUT terraform:  } 2025-05-05 00:01:17.902191 | orchestrator | 00:01:17.902 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-05-05 00:01:17.902253 | orchestrator | 00:01:17.902 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-05 00:01:17.902290 | orchestrator | 00:01:17.902 STDOUT terraform:  + attachment = (known after apply) 2025-05-05 00:01:17.902318 | orchestrator | 00:01:17.902 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.902357 | orchestrator | 00:01:17.902 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.902394 | orchestrator | 00:01:17.902 STDOUT terraform:  + metadata = (known after apply) 2025-05-05 00:01:17.902440 | orchestrator | 00:01:17.902 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-05-05 00:01:17.902477 | orchestrator | 00:01:17.902 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.902503 | orchestrator | 00:01:17.902 STDOUT terraform:  + size = 20 2025-05-05 00:01:17.902529 | orchestrator | 00:01:17.902 STDOUT terraform:  + volume_type = "ssd" 2025-05-05 00:01:17.902546 | orchestrator | 00:01:17.902 STDOUT terraform:  } 2025-05-05 00:01:17.902596 | orchestrator | 00:01:17.902 STDOUT terraform:  # 
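For context, plan entries of this shape usually come from a single count-indexed resource; the round-robin names (`testbed-volume-9-node-3`, `testbed-volume-12-node-0`, ...) match `count.index % 6`. A minimal sketch of such a definition follows; the counts and structure are assumptions inferred from the plan output, not taken from the actual testbed configuration:

```hcl
# Hypothetical reconstruction -- counts and naming are assumptions
# inferred from the plan above, where 18 extra 20 GB volumes are
# spread round-robin across 6 nodes (count.index % 6).
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 18
  availability_zone = "nova"
  name              = "testbed-volume-${count.index}-node-${count.index % 6}"
  size              = 20
  volume_type       = "ssd"
}
```

The `node_base_volume` entries differ only in size (80 GB) and in carrying an `image_id`, since they are image-backed boot volumes rather than blank data volumes.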
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
2025-05-05 00:01:17.907542 | orchestrator | 00:01:17.907 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-05 00:01:17.907569 | orchestrator | 00:01:17.907 STDOUT terraform:  + force_delete = false 2025-05-05 00:01:17.907608 | orchestrator | 00:01:17.907 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.907647 | orchestrator | 00:01:17.907 STDOUT terraform:  + image_id = (known after apply) 2025-05-05 00:01:17.907685 | orchestrator | 00:01:17.907 STDOUT terraform:  + image_name = (known after apply) 2025-05-05 00:01:17.907713 | orchestrator | 00:01:17.907 STDOUT terraform:  + key_pair = "testbed" 2025-05-05 00:01:17.907746 | orchestrator | 00:01:17.907 STDOUT terraform:  + name = "testbed-node-2" 2025-05-05 00:01:17.907774 | orchestrator | 00:01:17.907 STDOUT terraform:  + power_state = "active" 2025-05-05 00:01:17.907851 | orchestrator | 00:01:17.907 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.907884 | orchestrator | 00:01:17.907 STDOUT terraform:  + security_groups = (known after apply) 2025-05-05 00:01:17.907910 | orchestrator | 00:01:17.907 STDOUT terraform:  + stop_before_destroy = false 2025-05-05 00:01:17.907948 | orchestrator | 00:01:17.907 STDOUT terraform:  + updated = (known after apply) 2025-05-05 00:01:17.908002 | orchestrator | 00:01:17.907 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-05 00:01:17.908021 | orchestrator | 00:01:17.907 STDOUT terraform:  + block_device { 2025-05-05 00:01:17.908048 | orchestrator | 00:01:17.908 STDOUT terraform:  + boot_index = 0 2025-05-05 00:01:17.908079 | orchestrator | 00:01:17.908 STDOUT terraform:  + delete_on_termination = false 2025-05-05 00:01:17.908116 | orchestrator | 00:01:17.908 STDOUT terraform:  + destination_type = "volume" 2025-05-05 00:01:17.908143 | orchestrator | 00:01:17.908 STDOUT terraform:  + multiattach = false 2025-05-05 00:01:17.908175 | orchestrator | 00:01:17.908 STDOUT terraform:  + source_type = "volume" 
2025-05-05 00:01:17.908215 | orchestrator | 00:01:17.908 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.908232 | orchestrator | 00:01:17.908 STDOUT terraform:  } 2025-05-05 00:01:17.908254 | orchestrator | 00:01:17.908 STDOUT terraform:  + network { 2025-05-05 00:01:17.908276 | orchestrator | 00:01:17.908 STDOUT terraform:  + access_network = false 2025-05-05 00:01:17.908309 | orchestrator | 00:01:17.908 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-05 00:01:17.908340 | orchestrator | 00:01:17.908 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-05 00:01:17.908371 | orchestrator | 00:01:17.908 STDOUT terraform:  + mac = (known after apply) 2025-05-05 00:01:17.908403 | orchestrator | 00:01:17.908 STDOUT terraform:  + name = (known after apply) 2025-05-05 00:01:17.908437 | orchestrator | 00:01:17.908 STDOUT terraform:  + port = (known after apply) 2025-05-05 00:01:17.908473 | orchestrator | 00:01:17.908 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.908489 | orchestrator | 00:01:17.908 STDOUT terraform:  } 2025-05-05 00:01:17.908496 | orchestrator | 00:01:17.908 STDOUT terraform:  } 2025-05-05 00:01:17.908541 | orchestrator | 00:01:17.908 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-05 00:01:17.908593 | orchestrator | 00:01:17.908 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-05 00:01:17.908629 | orchestrator | 00:01:17.908 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-05 00:01:17.908663 | orchestrator | 00:01:17.908 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-05 00:01:17.908697 | orchestrator | 00:01:17.908 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-05 00:01:17.908733 | orchestrator | 00:01:17.908 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.908757 | orchestrator | 00:01:17.908 STDOUT terraform:  + availability_zone = "nova" 
2025-05-05 00:01:17.908776 | orchestrator | 00:01:17.908 STDOUT terraform:  + config_drive = true 2025-05-05 00:01:17.908811 | orchestrator | 00:01:17.908 STDOUT terraform:  + created = (known after apply) 2025-05-05 00:01:17.908865 | orchestrator | 00:01:17.908 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-05 00:01:17.908899 | orchestrator | 00:01:17.908 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-05 00:01:17.908921 | orchestrator | 00:01:17.908 STDOUT terraform:  + force_delete = false 2025-05-05 00:01:17.908957 | orchestrator | 00:01:17.908 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.908996 | orchestrator | 00:01:17.908 STDOUT terraform:  + image_id = (known after apply) 2025-05-05 00:01:17.909028 | orchestrator | 00:01:17.908 STDOUT terraform:  + image_name = (known after apply) 2025-05-05 00:01:17.909076 | orchestrator | 00:01:17.909 STDOUT terraform:  + key_pair = "testbed" 2025-05-05 00:01:17.909107 | orchestrator | 00:01:17.909 STDOUT terraform:  + name = "testbed-node-3" 2025-05-05 00:01:17.909133 | orchestrator | 00:01:17.909 STDOUT terraform:  + power_state = "active" 2025-05-05 00:01:17.909169 | orchestrator | 00:01:17.909 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.909204 | orchestrator | 00:01:17.909 STDOUT terraform:  + security_groups = (known after apply) 2025-05-05 00:01:17.909229 | orchestrator | 00:01:17.909 STDOUT terraform:  + stop_before_destroy = false 2025-05-05 00:01:17.909264 | orchestrator | 00:01:17.909 STDOUT terraform:  + updated = (known after apply) 2025-05-05 00:01:17.909335 | orchestrator | 00:01:17.909 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-05 00:01:17.909354 | orchestrator | 00:01:17.909 STDOUT terraform:  + block_device { 2025-05-05 00:01:17.909381 | orchestrator | 00:01:17.909 STDOUT terraform:  + boot_index = 0 2025-05-05 00:01:17.909410 | orchestrator | 00:01:17.909 STDOUT terraform:  + 
delete_on_termination = false 2025-05-05 00:01:17.909455 | orchestrator | 00:01:17.909 STDOUT terraform:  + destination_type = "volume" 2025-05-05 00:01:17.909489 | orchestrator | 00:01:17.909 STDOUT terraform:  + multiattach = false 2025-05-05 00:01:17.909516 | orchestrator | 00:01:17.909 STDOUT terraform:  + source_type = "volume" 2025-05-05 00:01:17.909556 | orchestrator | 00:01:17.909 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.909571 | orchestrator | 00:01:17.909 STDOUT terraform:  } 2025-05-05 00:01:17.909587 | orchestrator | 00:01:17.909 STDOUT terraform:  + network { 2025-05-05 00:01:17.909608 | orchestrator | 00:01:17.909 STDOUT terraform:  + access_network = false 2025-05-05 00:01:17.909640 | orchestrator | 00:01:17.909 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-05 00:01:17.909673 | orchestrator | 00:01:17.909 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-05 00:01:17.909705 | orchestrator | 00:01:17.909 STDOUT terraform:  + mac = (known after apply) 2025-05-05 00:01:17.909737 | orchestrator | 00:01:17.909 STDOUT terraform:  + name = (known after apply) 2025-05-05 00:01:17.909770 | orchestrator | 00:01:17.909 STDOUT terraform:  + port = (known after apply) 2025-05-05 00:01:17.909804 | orchestrator | 00:01:17.909 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.909852 | orchestrator | 00:01:17.909 STDOUT terraform:  } 2025-05-05 00:01:17.909883 | orchestrator | 00:01:17.909 STDOUT terraform:  } 2025-05-05 00:01:17.909890 | orchestrator | 00:01:17.909 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-05 00:01:17.909926 | orchestrator | 00:01:17.909 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-05 00:01:17.909961 | orchestrator | 00:01:17.909 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-05 00:01:17.909995 | orchestrator | 00:01:17.909 STDOUT terraform:  + access_ip_v6 = (known after 
apply) 2025-05-05 00:01:17.910053 | orchestrator | 00:01:17.909 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-05 00:01:17.910088 | orchestrator | 00:01:17.910 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.910112 | orchestrator | 00:01:17.910 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.910133 | orchestrator | 00:01:17.910 STDOUT terraform:  + config_drive = true 2025-05-05 00:01:17.910186 | orchestrator | 00:01:17.910 STDOUT terraform:  + created = (known after apply) 2025-05-05 00:01:17.910222 | orchestrator | 00:01:17.910 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-05 00:01:17.910252 | orchestrator | 00:01:17.910 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-05 00:01:17.910282 | orchestrator | 00:01:17.910 STDOUT terraform:  + force_delete = false 2025-05-05 00:01:17.910321 | orchestrator | 00:01:17.910 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.910358 | orchestrator | 00:01:17.910 STDOUT terraform:  + image_id = (known after apply) 2025-05-05 00:01:17.910397 | orchestrator | 00:01:17.910 STDOUT terraform:  + image_name = (known after apply) 2025-05-05 00:01:17.910422 | orchestrator | 00:01:17.910 STDOUT terraform:  + key_pair = "testbed" 2025-05-05 00:01:17.910453 | orchestrator | 00:01:17.910 STDOUT terraform:  + name = "testbed-node-4" 2025-05-05 00:01:17.910479 | orchestrator | 00:01:17.910 STDOUT terraform:  + power_state = "active" 2025-05-05 00:01:17.910515 | orchestrator | 00:01:17.910 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.910550 | orchestrator | 00:01:17.910 STDOUT terraform:  + security_groups = (known after apply) 2025-05-05 00:01:17.910573 | orchestrator | 00:01:17.910 STDOUT terraform:  + stop_before_destroy = false 2025-05-05 00:01:17.910609 | orchestrator | 00:01:17.910 STDOUT terraform:  + updated = (known after apply) 2025-05-05 00:01:17.910658 | orchestrator | 00:01:17.910 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-05 00:01:17.910677 | orchestrator | 00:01:17.910 STDOUT terraform:  + block_device { 2025-05-05 00:01:17.910701 | orchestrator | 00:01:17.910 STDOUT terraform:  + boot_index = 0 2025-05-05 00:01:17.910730 | orchestrator | 00:01:17.910 STDOUT terraform:  + delete_on_termination = false 2025-05-05 00:01:17.910759 | orchestrator | 00:01:17.910 STDOUT terraform:  + destination_type = "volume" 2025-05-05 00:01:17.910804 | orchestrator | 00:01:17.910 STDOUT terraform:  + multiattach = false 2025-05-05 00:01:17.910883 | orchestrator | 00:01:17.910 STDOUT terraform:  + source_type = "volume" 2025-05-05 00:01:17.910956 | orchestrator | 00:01:17.910 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.910975 | orchestrator | 00:01:17.910 STDOUT terraform:  } 2025-05-05 00:01:17.911005 | orchestrator | 00:01:17.910 STDOUT terraform:  + network { 2025-05-05 00:01:17.911028 | orchestrator | 00:01:17.911 STDOUT terraform:  + access_network = false 2025-05-05 00:01:17.911086 | orchestrator | 00:01:17.911 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-05 00:01:17.911103 | orchestrator | 00:01:17.911 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-05 00:01:17.911149 | orchestrator | 00:01:17.911 STDOUT terraform:  + mac = (known after apply) 2025-05-05 00:01:17.911188 | orchestrator | 00:01:17.911 STDOUT terraform:  + name = (known after apply) 2025-05-05 00:01:17.911215 | orchestrator | 00:01:17.911 STDOUT terraform:  + port = (known after apply) 2025-05-05 00:01:17.911250 | orchestrator | 00:01:17.911 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.911266 | orchestrator | 00:01:17.911 STDOUT terraform:  } 2025-05-05 00:01:17.911283 | orchestrator | 00:01:17.911 STDOUT terraform:  } 2025-05-05 00:01:17.911329 | orchestrator | 00:01:17.911 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-05 00:01:17.911374 | 
orchestrator | 00:01:17.911 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-05 00:01:17.911409 | orchestrator | 00:01:17.911 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-05 00:01:17.911444 | orchestrator | 00:01:17.911 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-05 00:01:17.911490 | orchestrator | 00:01:17.911 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-05 00:01:17.911521 | orchestrator | 00:01:17.911 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.911556 | orchestrator | 00:01:17.911 STDOUT terraform:  + availability_zone = "nova" 2025-05-05 00:01:17.911589 | orchestrator | 00:01:17.911 STDOUT terraform:  + config_drive = true 2025-05-05 00:01:17.911625 | orchestrator | 00:01:17.911 STDOUT terraform:  + created = (known after apply) 2025-05-05 00:01:17.911661 | orchestrator | 00:01:17.911 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-05 00:01:17.911696 | orchestrator | 00:01:17.911 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-05 00:01:17.911717 | orchestrator | 00:01:17.911 STDOUT terraform:  + force_delete = false 2025-05-05 00:01:17.911767 | orchestrator | 00:01:17.911 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.911804 | orchestrator | 00:01:17.911 STDOUT terraform:  + image_id = (known after apply) 2025-05-05 00:01:17.911868 | orchestrator | 00:01:17.911 STDOUT terraform:  + image_name = (known after apply) 2025-05-05 00:01:17.911898 | orchestrator | 00:01:17.911 STDOUT terraform:  + key_pair = "testbed" 2025-05-05 00:01:17.911905 | orchestrator | 00:01:17.911 STDOUT terraform:  + name = "testbed-node-5" 2025-05-05 00:01:17.911924 | orchestrator | 00:01:17.911 STDOUT terraform:  + power_state = "active" 2025-05-05 00:01:17.911960 | orchestrator | 00:01:17.911 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.911994 | orchestrator | 00:01:17.911 STDOUT terraform:  + 
security_groups = (known after apply) 2025-05-05 00:01:17.912018 | orchestrator | 00:01:17.911 STDOUT terraform:  + stop_before_destroy = false 2025-05-05 00:01:17.912053 | orchestrator | 00:01:17.912 STDOUT terraform:  + updated = (known after apply) 2025-05-05 00:01:17.912103 | orchestrator | 00:01:17.912 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-05 00:01:17.912121 | orchestrator | 00:01:17.912 STDOUT terraform:  + block_device { 2025-05-05 00:01:17.912146 | orchestrator | 00:01:17.912 STDOUT terraform:  + boot_index = 0 2025-05-05 00:01:17.912174 | orchestrator | 00:01:17.912 STDOUT terraform:  + delete_on_termination = false 2025-05-05 00:01:17.912224 | orchestrator | 00:01:17.912 STDOUT terraform:  + destination_type = "volume" 2025-05-05 00:01:17.912255 | orchestrator | 00:01:17.912 STDOUT terraform:  + multiattach = false 2025-05-05 00:01:17.912285 | orchestrator | 00:01:17.912 STDOUT terraform:  + source_type = "volume" 2025-05-05 00:01:17.912324 | orchestrator | 00:01:17.912 STDOUT terraform:  + uuid = (known after apply) 2025-05-05 00:01:17.912341 | orchestrator | 00:01:17.912 STDOUT terraform:  } 2025-05-05 00:01:17.912356 | orchestrator | 00:01:17.912 STDOUT terraform:  + network { 2025-05-05 00:01:17.912377 | orchestrator | 00:01:17.912 STDOUT terraform:  + access_network = false 2025-05-05 00:01:17.912408 | orchestrator | 00:01:17.912 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-05 00:01:17.912438 | orchestrator | 00:01:17.912 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-05 00:01:17.912472 | orchestrator | 00:01:17.912 STDOUT terraform:  + mac = (known after apply) 2025-05-05 00:01:17.912506 | orchestrator | 00:01:17.912 STDOUT terraform:  + name = (known after apply) 2025-05-05 00:01:17.912534 | orchestrator | 00:01:17.912 STDOUT terraform:  + port = (known after apply) 2025-05-05 00:01:17.912566 | orchestrator | 00:01:17.912 STDOUT terraform:  + uuid = (known after apply) 
2025-05-05 00:01:17.912573 | orchestrator | 00:01:17.912 STDOUT terraform:  } 2025-05-05 00:01:17.912592 | orchestrator | 00:01:17.912 STDOUT terraform:  } 2025-05-05 00:01:17.912650 | orchestrator | 00:01:17.912 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-05 00:01:17.912659 | orchestrator | 00:01:17.912 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-05 00:01:17.912692 | orchestrator | 00:01:17.912 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-05 00:01:17.912721 | orchestrator | 00:01:17.912 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.912735 | orchestrator | 00:01:17.912 STDOUT terraform:  + name = "testbed" 2025-05-05 00:01:17.912763 | orchestrator | 00:01:17.912 STDOUT terraform:  + private_key = (sensitive value) 2025-05-05 00:01:17.912791 | orchestrator | 00:01:17.912 STDOUT terraform:  + public_key = (known after apply) 2025-05-05 00:01:17.912855 | orchestrator | 00:01:17.912 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.912863 | orchestrator | 00:01:17.912 STDOUT terraform:  + user_id = (known after apply) 2025-05-05 00:01:17.912869 | orchestrator | 00:01:17.912 STDOUT terraform:  } 2025-05-05 00:01:17.912920 | orchestrator | 00:01:17.912 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-05 00:01:17.912968 | orchestrator | 00:01:17.912 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.912996 | orchestrator | 00:01:17.912 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.913032 | orchestrator | 00:01:17.912 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.913054 | orchestrator | 00:01:17.913 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.913087 | orchestrator | 00:01:17.913 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.913114 | 
orchestrator | 00:01:17.913 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.913121 | orchestrator | 00:01:17.913 STDOUT terraform:  } 2025-05-05 00:01:17.913172 | orchestrator | 00:01:17.913 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-05 00:01:17.913225 | orchestrator | 00:01:17.913 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.913253 | orchestrator | 00:01:17.913 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.913287 | orchestrator | 00:01:17.913 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.913313 | orchestrator | 00:01:17.913 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.913341 | orchestrator | 00:01:17.913 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.913377 | orchestrator | 00:01:17.913 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.913385 | orchestrator | 00:01:17.913 STDOUT terraform:  } 2025-05-05 00:01:17.913432 | orchestrator | 00:01:17.913 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-05 00:01:17.913481 | orchestrator | 00:01:17.913 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.913510 | orchestrator | 00:01:17.913 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.913539 | orchestrator | 00:01:17.913 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.913568 | orchestrator | 00:01:17.913 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.913596 | orchestrator | 00:01:17.913 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.913624 | orchestrator | 00:01:17.913 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.913636 | orchestrator | 00:01:17.913 STDOUT 
terraform:  } 2025-05-05 00:01:17.913683 | orchestrator | 00:01:17.913 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-05 00:01:17.913731 | orchestrator | 00:01:17.913 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.913760 | orchestrator | 00:01:17.913 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.913789 | orchestrator | 00:01:17.913 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.913830 | orchestrator | 00:01:17.913 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.913858 | orchestrator | 00:01:17.913 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.913893 | orchestrator | 00:01:17.913 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.913942 | orchestrator | 00:01:17.913 STDOUT terraform:  } 2025-05-05 00:01:17.913967 | orchestrator | 00:01:17.913 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-05 00:01:17.914040 | orchestrator | 00:01:17.913 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.914063 | orchestrator | 00:01:17.914 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.914093 | orchestrator | 00:01:17.914 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.914122 | orchestrator | 00:01:17.914 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.914153 | orchestrator | 00:01:17.914 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.914181 | orchestrator | 00:01:17.914 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.914188 | orchestrator | 00:01:17.914 STDOUT terraform:  } 2025-05-05 00:01:17.914240 | orchestrator | 00:01:17.914 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2025-05-05 00:01:17.914289 | orchestrator | 00:01:17.914 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.914316 | orchestrator | 00:01:17.914 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.914344 | orchestrator | 00:01:17.914 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.914374 | orchestrator | 00:01:17.914 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.914404 | orchestrator | 00:01:17.914 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.914433 | orchestrator | 00:01:17.914 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.914441 | orchestrator | 00:01:17.914 STDOUT terraform:  } 2025-05-05 00:01:17.914493 | orchestrator | 00:01:17.914 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-05 00:01:17.914542 | orchestrator | 00:01:17.914 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.914574 | orchestrator | 00:01:17.914 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.914603 | orchestrator | 00:01:17.914 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.914631 | orchestrator | 00:01:17.914 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.914656 | orchestrator | 00:01:17.914 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.914685 | orchestrator | 00:01:17.914 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.914693 | orchestrator | 00:01:17.914 STDOUT terraform:  } 2025-05-05 00:01:17.914743 | orchestrator | 00:01:17.914 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-05 00:01:17.914791 | orchestrator | 00:01:17.914 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" 
"node_volume_attachment" { 2025-05-05 00:01:17.914844 | orchestrator | 00:01:17.914 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.914881 | orchestrator | 00:01:17.914 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.914909 | orchestrator | 00:01:17.914 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.914946 | orchestrator | 00:01:17.914 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.914967 | orchestrator | 00:01:17.914 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.915024 | orchestrator | 00:01:17.914 STDOUT terraform:  } 2025-05-05 00:01:17.915032 | orchestrator | 00:01:17.914 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-05 00:01:17.915074 | orchestrator | 00:01:17.915 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.915104 | orchestrator | 00:01:17.915 STDOUT terraform:  + device = (known after apply) 2025-05-05 00:01:17.915133 | orchestrator | 00:01:17.915 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.915160 | orchestrator | 00:01:17.915 STDOUT terraform:  + instance_id = (known after apply) 2025-05-05 00:01:17.915190 | orchestrator | 00:01:17.915 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.915219 | orchestrator | 00:01:17.915 STDOUT terraform:  + volume_id = (known after apply) 2025-05-05 00:01:17.915227 | orchestrator | 00:01:17.915 STDOUT terraform:  } 2025-05-05 00:01:17.915279 | orchestrator | 00:01:17.915 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-05-05 00:01:17.915329 | orchestrator | 00:01:17.915 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-05 00:01:17.915358 | orchestrator | 00:01:17.915 STDOUT terraform:  + device = (known after apply) 2025-05-05 
2025-05-05 00:01:17.915386 | orchestrator | 00:01:17.915 STDOUT terraform: [plan output reflowed below; per-fragment Zuul timestamps 00:01:17.915-00:01:17.927 omitted]

      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  [identical plan blocks follow for node_volume_attachment[11] through node_volume_attachment[17]]

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  [identical plan blocks follow for node_port_management[1] through node_port_management[5], differing only in the fixed_ip ip_address: 192.168.16.11, 192.168.16.12, 192.168.16.13, 192.168.16.14, 192.168.16.15]

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be
created 2025-05-05 00:01:17.927144 | orchestrator | 00:01:17.927 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-05 00:01:17.927169 | orchestrator | 00:01:17.927 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-05 00:01:17.927221 | orchestrator | 00:01:17.927 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.927239 | orchestrator | 00:01:17.927 STDOUT terraform:  + availability_zone_hints = [ 2025-05-05 00:01:17.927255 | orchestrator | 00:01:17.927 STDOUT terraform:  + "nova", 2025-05-05 00:01:17.927272 | orchestrator | 00:01:17.927 STDOUT terraform:  ] 2025-05-05 00:01:17.927315 | orchestrator | 00:01:17.927 STDOUT terraform:  + distributed = (known after apply) 2025-05-05 00:01:17.927383 | orchestrator | 00:01:17.927 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-05 00:01:17.927432 | orchestrator | 00:01:17.927 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-05 00:01:17.927451 | orchestrator | 00:01:17.927 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.927494 | orchestrator | 00:01:17.927 STDOUT terraform:  + name = "testbed" 2025-05-05 00:01:17.927513 | orchestrator | 00:01:17.927 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.927530 | orchestrator | 00:01:17.927 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.927546 | orchestrator | 00:01:17.927 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-05 00:01:17.927563 | orchestrator | 00:01:17.927 STDOUT terraform:  } 2025-05-05 00:01:17.927623 | orchestrator | 00:01:17.927 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-05 00:01:17.927678 | orchestrator | 00:01:17.927 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-05 00:01:17.927697 | orchestrator | 00:01:17.927 STDOUT 
terraform:  + description = "ssh" 2025-05-05 00:01:17.927714 | orchestrator | 00:01:17.927 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.927730 | orchestrator | 00:01:17.927 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.927748 | orchestrator | 00:01:17.927 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.927764 | orchestrator | 00:01:17.927 STDOUT terraform:  + port_range_max = 22 2025-05-05 00:01:17.927781 | orchestrator | 00:01:17.927 STDOUT terraform:  + port_range_min = 22 2025-05-05 00:01:17.927798 | orchestrator | 00:01:17.927 STDOUT terraform:  + protocol = "tcp" 2025-05-05 00:01:17.927887 | orchestrator | 00:01:17.927 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.927904 | orchestrator | 00:01:17.927 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.927921 | orchestrator | 00:01:17.927 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.927961 | orchestrator | 00:01:17.927 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.927976 | orchestrator | 00:01:17.927 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.928025 | orchestrator | 00:01:17.927 STDOUT terraform:  } 2025-05-05 00:01:17.928041 | orchestrator | 00:01:17.927 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-05 00:01:17.928064 | orchestrator | 00:01:17.928 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-05 00:01:17.928101 | orchestrator | 00:01:17.928 STDOUT terraform:  + description = "wireguard" 2025-05-05 00:01:17.928116 | orchestrator | 00:01:17.928 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.928130 | orchestrator | 00:01:17.928 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.928172 | orchestrator | 00:01:17.928 STDOUT terraform:  + id = (known after apply) 
2025-05-05 00:01:17.928187 | orchestrator | 00:01:17.928 STDOUT terraform:  + port_range_max = 51820 2025-05-05 00:01:17.928200 | orchestrator | 00:01:17.928 STDOUT terraform:  + port_range_min = 51820 2025-05-05 00:01:17.928215 | orchestrator | 00:01:17.928 STDOUT terraform:  + protocol = "udp" 2025-05-05 00:01:17.928336 | orchestrator | 00:01:17.928 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.928354 | orchestrator | 00:01:17.928 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.928366 | orchestrator | 00:01:17.928 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.928378 | orchestrator | 00:01:17.928 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.928392 | orchestrator | 00:01:17.928 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.928404 | orchestrator | 00:01:17.928 STDOUT terraform:  } 2025-05-05 00:01:17.928418 | orchestrator | 00:01:17.928 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-05 00:01:17.928474 | orchestrator | 00:01:17.928 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-05 00:01:17.928530 | orchestrator | 00:01:17.928 STDOUT terraform:  + direction 2025-05-05 00:01:17.928545 | orchestrator | 00:01:17.928 STDOUT terraform:  = "ingress" 2025-05-05 00:01:17.928582 | orchestrator | 00:01:17.928 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.928598 | orchestrator | 00:01:17.928 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.928642 | orchestrator | 00:01:17.928 STDOUT terraform:  + protocol = "tcp" 2025-05-05 00:01:17.928658 | orchestrator | 00:01:17.928 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.928738 | orchestrator | 00:01:17.928 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.928754 | orchestrator | 
00:01:17.928 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-05 00:01:17.928828 | orchestrator | 00:01:17.928 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.928841 | orchestrator | 00:01:17.928 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.928851 | orchestrator | 00:01:17.928 STDOUT terraform:  } 2025-05-05 00:01:17.928865 | orchestrator | 00:01:17.928 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-05 00:01:17.928899 | orchestrator | 00:01:17.928 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-05 00:01:17.928928 | orchestrator | 00:01:17.928 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.928968 | orchestrator | 00:01:17.928 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.928984 | orchestrator | 00:01:17.928 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.929020 | orchestrator | 00:01:17.928 STDOUT terraform:  + protocol = "udp" 2025-05-05 00:01:17.929041 | orchestrator | 00:01:17.928 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.929079 | orchestrator | 00:01:17.929 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.929095 | orchestrator | 00:01:17.929 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-05 00:01:17.929140 | orchestrator | 00:01:17.929 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.929154 | orchestrator | 00:01:17.929 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.929202 | orchestrator | 00:01:17.929 STDOUT terraform:  } 2025-05-05 00:01:17.929222 | orchestrator | 00:01:17.929 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-05 00:01:17.929261 | orchestrator | 00:01:17.929 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-05 00:01:17.929281 | orchestrator | 00:01:17.929 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.929295 | orchestrator | 00:01:17.929 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.929334 | orchestrator | 00:01:17.929 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.929349 | orchestrator | 00:01:17.929 STDOUT terraform:  + protocol = "icmp" 2025-05-05 00:01:17.929389 | orchestrator | 00:01:17.929 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.929404 | orchestrator | 00:01:17.929 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.929442 | orchestrator | 00:01:17.929 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.929458 | orchestrator | 00:01:17.929 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.929502 | orchestrator | 00:01:17.929 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.929562 | orchestrator | 00:01:17.929 STDOUT terraform:  } 2025-05-05 00:01:17.929577 | orchestrator | 00:01:17.929 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-05 00:01:17.929622 | orchestrator | 00:01:17.929 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-05 00:01:17.929637 | orchestrator | 00:01:17.929 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.929651 | orchestrator | 00:01:17.929 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.929688 | orchestrator | 00:01:17.929 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.929703 | orchestrator | 00:01:17.929 STDOUT terraform:  + protocol = "tcp" 2025-05-05 00:01:17.929740 | orchestrator | 00:01:17.929 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.929760 | orchestrator | 00:01:17.929 STDOUT 
terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.929796 | orchestrator | 00:01:17.929 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.929810 | orchestrator | 00:01:17.929 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.929884 | orchestrator | 00:01:17.929 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.929943 | orchestrator | 00:01:17.929 STDOUT terraform:  } 2025-05-05 00:01:17.929958 | orchestrator | 00:01:17.929 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-05 00:01:17.930000 | orchestrator | 00:01:17.929 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-05 00:01:17.930044 | orchestrator | 00:01:17.929 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.930060 | orchestrator | 00:01:17.930 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.930095 | orchestrator | 00:01:17.930 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.930110 | orchestrator | 00:01:17.930 STDOUT terraform:  + protocol = "udp" 2025-05-05 00:01:17.930142 | orchestrator | 00:01:17.930 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.930176 | orchestrator | 00:01:17.930 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.930190 | orchestrator | 00:01:17.930 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.930231 | orchestrator | 00:01:17.930 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.930247 | orchestrator | 00:01:17.930 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.930260 | orchestrator | 00:01:17.930 STDOUT terraform:  } 2025-05-05 00:01:17.930322 | orchestrator | 00:01:17.930 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-05 00:01:17.930376 
| orchestrator | 00:01:17.930 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-05 00:01:17.930391 | orchestrator | 00:01:17.930 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.930405 | orchestrator | 00:01:17.930 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.930450 | orchestrator | 00:01:17.930 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.930464 | orchestrator | 00:01:17.930 STDOUT terraform:  + protocol = "icmp" 2025-05-05 00:01:17.930499 | orchestrator | 00:01:17.930 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.930514 | orchestrator | 00:01:17.930 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.930552 | orchestrator | 00:01:17.930 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.930567 | orchestrator | 00:01:17.930 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.930613 | orchestrator | 00:01:17.930 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.930671 | orchestrator | 00:01:17.930 STDOUT terraform:  } 2025-05-05 00:01:17.930686 | orchestrator | 00:01:17.930 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-05 00:01:17.930729 | orchestrator | 00:01:17.930 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-05 00:01:17.930744 | orchestrator | 00:01:17.930 STDOUT terraform:  + description = "vrrp" 2025-05-05 00:01:17.930757 | orchestrator | 00:01:17.930 STDOUT terraform:  + direction = "ingress" 2025-05-05 00:01:17.930790 | orchestrator | 00:01:17.930 STDOUT terraform:  + ethertype = "IPv4" 2025-05-05 00:01:17.930840 | orchestrator | 00:01:17.930 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.930884 | orchestrator | 00:01:17.930 STDOUT terraform:  + protocol = "112" 2025-05-05 00:01:17.930900 | 
orchestrator | 00:01:17.930 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.930942 | orchestrator | 00:01:17.930 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-05 00:01:17.930958 | orchestrator | 00:01:17.930 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-05 00:01:17.930995 | orchestrator | 00:01:17.930 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-05 00:01:17.931010 | orchestrator | 00:01:17.930 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.931055 | orchestrator | 00:01:17.930 STDOUT terraform:  } 2025-05-05 00:01:17.931070 | orchestrator | 00:01:17.930 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-05 00:01:17.931109 | orchestrator | 00:01:17.931 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-05 00:01:17.931124 | orchestrator | 00:01:17.931 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.931168 | orchestrator | 00:01:17.931 STDOUT terraform:  + description = "management security group" 2025-05-05 00:01:17.931183 | orchestrator | 00:01:17.931 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.931224 | orchestrator | 00:01:17.931 STDOUT terraform:  + name = "testbed-management" 2025-05-05 00:01:17.931238 | orchestrator | 00:01:17.931 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.931278 | orchestrator | 00:01:17.931 STDOUT terraform:  + stateful = (known after apply) 2025-05-05 00:01:17.931293 | orchestrator | 00:01:17.931 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.931307 | orchestrator | 00:01:17.931 STDOUT terraform:  } 2025-05-05 00:01:17.931361 | orchestrator | 00:01:17.931 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-05 00:01:17.931410 | orchestrator | 00:01:17.931 STDOUT terraform:  + resource 
"openstack_networking_secgroup_v2" "security_group_node" { 2025-05-05 00:01:17.931425 | orchestrator | 00:01:17.931 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.931470 | orchestrator | 00:01:17.931 STDOUT terraform:  + description = "node security group" 2025-05-05 00:01:17.931485 | orchestrator | 00:01:17.931 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.931522 | orchestrator | 00:01:17.931 STDOUT terraform:  + name = "testbed-node" 2025-05-05 00:01:17.931536 | orchestrator | 00:01:17.931 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.931576 | orchestrator | 00:01:17.931 STDOUT terraform:  + stateful = (known after apply) 2025-05-05 00:01:17.931591 | orchestrator | 00:01:17.931 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.931605 | orchestrator | 00:01:17.931 STDOUT terraform:  } 2025-05-05 00:01:17.931656 | orchestrator | 00:01:17.931 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-05 00:01:17.931702 | orchestrator | 00:01:17.931 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-05 00:01:17.931736 | orchestrator | 00:01:17.931 STDOUT terraform:  + all_tags = (known after apply) 2025-05-05 00:01:17.931751 | orchestrator | 00:01:17.931 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-05 00:01:17.931785 | orchestrator | 00:01:17.931 STDOUT terraform:  + dns_nameservers = [ 2025-05-05 00:01:17.931797 | orchestrator | 00:01:17.931 STDOUT terraform:  + "8.8.8.8", 2025-05-05 00:01:17.931810 | orchestrator | 00:01:17.931 STDOUT terraform:  + "9.9.9.9", 2025-05-05 00:01:17.931840 | orchestrator | 00:01:17.931 STDOUT terraform:  ] 2025-05-05 00:01:17.931854 | orchestrator | 00:01:17.931 STDOUT terraform:  + enable_dhcp = true 2025-05-05 00:01:17.931888 | orchestrator | 00:01:17.931 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-05 00:01:17.931904 | orchestrator | 
00:01:17.931 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.931937 | orchestrator | 00:01:17.931 STDOUT terraform:  + ip_version = 4 2025-05-05 00:01:17.931953 | orchestrator | 00:01:17.931 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-05 00:01:17.931999 | orchestrator | 00:01:17.931 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-05 00:01:17.932041 | orchestrator | 00:01:17.931 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-05 00:01:17.932055 | orchestrator | 00:01:17.932 STDOUT terraform:  + network_id = (known after apply) 2025-05-05 00:01:17.932091 | orchestrator | 00:01:17.932 STDOUT terraform:  + no_gateway = false 2025-05-05 00:01:17.932106 | orchestrator | 00:01:17.932 STDOUT terraform:  + region = (known after apply) 2025-05-05 00:01:17.932149 | orchestrator | 00:01:17.932 STDOUT terraform:  + service_types = (known after apply) 2025-05-05 00:01:17.932164 | orchestrator | 00:01:17.932 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-05 00:01:17.932200 | orchestrator | 00:01:17.932 STDOUT terraform:  + allocation_pool { 2025-05-05 00:01:17.932215 | orchestrator | 00:01:17.932 STDOUT terraform:  + end = "192.168.31.250" 2025-05-05 00:01:17.932251 | orchestrator | 00:01:17.932 STDOUT terraform:  + start = "192.168.31.200" 2025-05-05 00:01:17.932264 | orchestrator | 00:01:17.932 STDOUT terraform:  } 2025-05-05 00:01:17.932277 | orchestrator | 00:01:17.932 STDOUT terraform:  } 2025-05-05 00:01:17.932290 | orchestrator | 00:01:17.932 STDOUT terraform:  # terraform_data.image will be created 2025-05-05 00:01:17.932304 | orchestrator | 00:01:17.932 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-05 00:01:17.932341 | orchestrator | 00:01:17.932 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.932362 | orchestrator | 00:01:17.932 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-05 00:01:17.932377 | orchestrator | 00:01:17.932 STDOUT terraform:  
+ output = (known after apply) 2025-05-05 00:01:17.932420 | orchestrator | 00:01:17.932 STDOUT terraform:  } 2025-05-05 00:01:17.932436 | orchestrator | 00:01:17.932 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-05 00:01:17.932472 | orchestrator | 00:01:17.932 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-05 00:01:17.932488 | orchestrator | 00:01:17.932 STDOUT terraform:  + id = (known after apply) 2025-05-05 00:01:17.932499 | orchestrator | 00:01:17.932 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-05 00:01:17.932513 | orchestrator | 00:01:17.932 STDOUT terraform:  + output = (known after apply) 2025-05-05 00:01:17.932552 | orchestrator | 00:01:17.932 STDOUT terraform:  } 2025-05-05 00:01:17.932567 | orchestrator | 00:01:17.932 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-05-05 00:01:17.932578 | orchestrator | 00:01:17.932 STDOUT terraform: Changes to Outputs: 2025-05-05 00:01:17.932591 | orchestrator | 00:01:17.932 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-05 00:01:17.932605 | orchestrator | 00:01:17.932 STDOUT terraform:  + private_key = (sensitive value) 2025-05-05 00:01:18.166929 | orchestrator | 00:01:18.166 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-05 00:01:18.167069 | orchestrator | 00:01:18.166 STDOUT terraform: terraform_data.image: Creating... 2025-05-05 00:01:18.167098 | orchestrator | 00:01:18.166 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=db80405c-8e45-17e7-141e-3bc043586322] 2025-05-05 00:01:18.170377 | orchestrator | 00:01:18.170 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=1caf046a-a6a3-c74f-b397-dd5513f86a64] 2025-05-05 00:01:18.178752 | orchestrator | 00:01:18.178 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-05 00:01:18.189323 | orchestrator | 00:01:18.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 
2025-05-05 00:01:18.189373 | orchestrator | 00:01:18.189 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-05 00:01:18.189434 | orchestrator | 00:01:18.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-05-05 00:01:18.189941 | orchestrator | 00:01:18.189 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-05 00:01:18.190411 | orchestrator | 00:01:18.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-05 00:01:18.190661 | orchestrator | 00:01:18.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-05 00:01:18.191133 | orchestrator | 00:01:18.191 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-05 00:01:18.191537 | orchestrator | 00:01:18.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-05 00:01:18.191617 | orchestrator | 00:01:18.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-05 00:01:18.654534 | orchestrator | 00:01:18.654 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-05 00:01:18.657131 | orchestrator | 00:01:18.656 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-05 00:01:18.664583 | orchestrator | 00:01:18.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-05-05 00:01:18.670293 | orchestrator | 00:01:18.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-05 00:01:19.991979 | orchestrator | 00:01:19.991 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 2s [id=testbed] 2025-05-05 00:01:20.000569 | orchestrator | 00:01:20.000 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 
2025-05-05 00:01:25.059179 | orchestrator | 00:01:25.058 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 7s [id=efc37236-a30d-4e8d-bca5-d9414dc6adb9] 2025-05-05 00:01:25.068811 | orchestrator | 00:01:25.068 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-05-05 00:01:28.191155 | orchestrator | 00:01:28.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-05-05 00:01:28.191376 | orchestrator | 00:01:28.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-05-05 00:01:28.191531 | orchestrator | 00:01:28.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-05 00:01:28.192764 | orchestrator | 00:01:28.192 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-05 00:01:28.193114 | orchestrator | 00:01:28.192 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-05 00:01:28.665637 | orchestrator | 00:01:28.192 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-05 00:01:28.665773 | orchestrator | 00:01:28.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-05-05 00:01:28.670678 | orchestrator | 00:01:28.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-05 00:01:28.752381 | orchestrator | 00:01:28.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=538b5ef1-8671-4fc9-a3c4-cba69448f95c] 2025-05-05 00:01:28.759264 | orchestrator | 00:01:28.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
2025-05-05 00:01:28.764615 | orchestrator | 00:01:28.764 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=4d0bf700-f9e0-49dc-ac25-e14623495170] 2025-05-05 00:01:28.771313 | orchestrator | 00:01:28.771 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-05-05 00:01:28.781637 | orchestrator | 00:01:28.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=3d84f93e-1c6d-4691-b492-2a4ac16c3944] 2025-05-05 00:01:28.792980 | orchestrator | 00:01:28.792 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=4fe28f7c-d5bd-43b5-ae36-5544cd531e3f] 2025-05-05 00:01:28.794254 | orchestrator | 00:01:28.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-05 00:01:28.799348 | orchestrator | 00:01:28.799 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=6b4716be-1a57-4f60-96f3-25458ff8018c] 2025-05-05 00:01:28.800268 | orchestrator | 00:01:28.800 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-05 00:01:28.807086 | orchestrator | 00:01:28.806 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-05-05 00:01:28.808929 | orchestrator | 00:01:28.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=af745260-1df8-42ba-a894-c5ed39f05370] 2025-05-05 00:01:28.815017 | orchestrator | 00:01:28.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
2025-05-05 00:01:28.856881 | orchestrator | 00:01:28.856 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=4cdb59ba-b27c-4aba-91f1-5fb12951bb58] 2025-05-05 00:01:28.860830 | orchestrator | 00:01:28.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=cf82fd11-af58-4978-8cf4-434466d92b22] 2025-05-05 00:01:28.864912 | orchestrator | 00:01:28.864 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-05-05 00:01:28.867556 | orchestrator | 00:01:28.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-05-05 00:01:30.001777 | orchestrator | 00:01:30.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-05-05 00:01:30.167190 | orchestrator | 00:01:30.166 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8] 2025-05-05 00:01:30.177314 | orchestrator | 00:01:30.177 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-05 00:01:35.072950 | orchestrator | 00:01:35.072 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-05-05 00:01:35.235330 | orchestrator | 00:01:35.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=408b0152-937f-48ea-b624-2492cd2dac87] 2025-05-05 00:01:35.243642 | orchestrator | 00:01:35.243 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-05 00:01:38.760455 | orchestrator | 00:01:38.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-05 00:01:38.772570 | orchestrator | 00:01:38.772 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... 
[10s elapsed] 2025-05-05 00:01:38.794883 | orchestrator | 00:01:38.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-05 00:01:38.801119 | orchestrator | 00:01:38.800 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-05 00:01:38.808463 | orchestrator | 00:01:38.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-05-05 00:01:38.815721 | orchestrator | 00:01:38.815 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-05 00:01:38.866477 | orchestrator | 00:01:38.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-05-05 00:01:38.868553 | orchestrator | 00:01:38.868 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-05-05 00:01:38.968782 | orchestrator | 00:01:38.968 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=06746298-857f-44a7-bac9-458d0cb80917] 2025-05-05 00:01:38.977399 | orchestrator | 00:01:38.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=615e20fc-a585-4d17-960f-58a126b0377d] 2025-05-05 00:01:38.989492 | orchestrator | 00:01:38.989 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-05 00:01:38.990566 | orchestrator | 00:01:38.990 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2025-05-05 00:01:39.014446 | orchestrator | 00:01:39.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=2486f75a-e60a-48fd-8d37-a608e25639e6]
2025-05-05 00:01:39.015032 | orchestrator | 00:01:39.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=d858a9fc-f161-4032-83a3-99286d7d6b6e]
2025-05-05 00:01:39.022199 | orchestrator | 00:01:39.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-05 00:01:39.024123 | orchestrator | 00:01:39.023 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-05 00:01:39.043359 | orchestrator | 00:01:39.042 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=2284275b-81dd-4b13-b1ce-7a79fe4b7203]
2025-05-05 00:01:39.050625 | orchestrator | 00:01:39.050 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-05 00:01:39.064847 | orchestrator | 00:01:39.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=42838bfa-cc1b-4702-98d9-e28ebdac68d7]
2025-05-05 00:01:39.071095 | orchestrator | 00:01:39.070 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=781aa17e-e7c9-4602-9f68-f5aa193f4164]
2025-05-05 00:01:39.080426 | orchestrator | 00:01:39.080 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-05 00:01:39.085653 | orchestrator | 00:01:39.085 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-05 00:01:39.087513 | orchestrator | 00:01:39.087 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c8514ff6e97659e60258f4bd8f12d63843a519b3]
2025-05-05 00:01:39.088210 | orchestrator | 00:01:39.088 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=42a6e7e5-8ee1-4531-a79c-d61afd2d8a10]
2025-05-05 00:01:39.094708 | orchestrator | 00:01:39.094 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=6c21e6b8d2a189f3fb7dffcbd1ac71f192fa991b]
2025-05-05 00:01:39.095073 | orchestrator | 00:01:39.095 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-05 00:01:40.179077 | orchestrator | 00:01:40.178 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-05 00:01:40.541183 | orchestrator | 00:01:40.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=1816abff-2c25-4262-8906-0081839edd92]
2025-05-05 00:01:45.244655 | orchestrator | 00:01:45.244 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-05 00:01:45.555853 | orchestrator | 00:01:45.555 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=e63a3641-9ab8-401e-ae51-b6341150c0e4]
2025-05-05 00:01:45.604365 | orchestrator | 00:01:45.603 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 7s [id=39edf4cb-ee2b-4dc1-9d0a-8dbccd926015]
2025-05-05 00:01:45.613500 | orchestrator | 00:01:45.613 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-05 00:01:48.991175 | orchestrator | 00:01:48.990 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-05 00:01:48.991297 | orchestrator | 00:01:48.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-05 00:01:49.023370 | orchestrator | 00:01:49.023 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-05 00:01:49.024472 | orchestrator | 00:01:49.024 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-05 00:01:49.051874 | orchestrator | 00:01:49.051 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-05 00:01:49.351834 | orchestrator | 00:01:49.351 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=34b4e4a4-0893-4c21-853f-0a97d76ef819]
2025-05-05 00:01:49.353057 | orchestrator | 00:01:49.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=303ddda9-04c8-4db7-a324-20b01373288b]
2025-05-05 00:01:49.364681 | orchestrator | 00:01:49.364 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=daf28dc1-fcee-4d4a-964d-2a80f7bc2af3]
2025-05-05 00:01:49.401303 | orchestrator | 00:01:49.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=3969d65f-a534-4e1c-b0b2-b40e2f22590e]
2025-05-05 00:01:49.438169 | orchestrator | 00:01:49.437 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=097579e1-2678-44d8-867c-c822514847b8]
2025-05-05 00:01:53.203210 | orchestrator | 00:01:53.202 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=5669e33b-6ecf-42fc-b660-48772bae4125]
2025-05-05 00:01:53.215954 | orchestrator | 00:01:53.215 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
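The volume records above are the typical output of count-based Terraform resources. A minimal hypothetical sketch of what such definitions could look like (the resource type and names `node_volume`, `node_base_volume`, and `manager_base_volume` are taken from the log; the count values are inferred from the indices seen, and the names and sizes are assumptions, since the actual configuration is not shown):

```hcl
# Hypothetical reconstruction — counts inferred from log indices [0]..[17]
# and [0]..[5]; names and sizes are assumed, not taken from the log.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18
  name  = "node-volume-${count.index}"
  size  = 20 # GiB, assumed
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count = 6
  name  = "node-base-volume-${count.index}"
  size  = 50 # GiB, assumed
}
```

Terraform creates all instances of a counted resource concurrently (bounded by its parallelism setting), which is why the "Creating...", "Still creating...", and "Creation complete" records for different indices interleave in the log.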
2025-05-05 00:01:53.216323 | orchestrator | 00:01:53.216 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-05 00:01:53.217224 | orchestrator | 00:01:53.217 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-05 00:01:53.352559 | orchestrator | 00:01:53.352 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=dc859715-34ef-471c-b80e-722f7f3ed2f4]
2025-05-05 00:01:53.361961 | orchestrator | 00:01:53.361 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-05 00:01:53.362094 | orchestrator | 00:01:53.361 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-05 00:01:53.362750 | orchestrator | 00:01:53.362 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-05 00:01:53.363707 | orchestrator | 00:01:53.363 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-05 00:01:53.369559 | orchestrator | 00:01:53.369 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-05 00:01:53.372644 | orchestrator | 00:01:53.372 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-05 00:01:53.386758 | orchestrator | 00:01:53.386 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=e0ea0c45-6cb4-4782-94ed-c7f03a32649c]
2025-05-05 00:01:53.395252 | orchestrator | 00:01:53.394 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-05 00:01:53.398731 | orchestrator | 00:01:53.398 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-05 00:01:53.403035 | orchestrator | 00:01:53.402 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-05 00:01:53.512009 | orchestrator | 00:01:53.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6be50726-02ae-4dd5-bdde-809fcc4e3e56]
2025-05-05 00:01:53.527942 | orchestrator | 00:01:53.527 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-05 00:01:53.622378 | orchestrator | 00:01:53.621 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=64d24bd1-d795-4c93-8a0a-37c132bc1fc1]
2025-05-05 00:01:53.639646 | orchestrator | 00:01:53.639 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-05 00:01:54.701342 | orchestrator | 00:01:54.700 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=c98e7d54-febf-420f-8d82-d36b3378f8ce]
2025-05-05 00:01:54.713285 | orchestrator | 00:01:54.713 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-05 00:01:54.806431 | orchestrator | 00:01:54.806 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=cbc5eb38-bb48-409b-84e4-69e41b9fdef1]
2025-05-05 00:01:54.822918 | orchestrator | 00:01:54.822 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-05 00:01:54.912674 | orchestrator | 00:01:54.912 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=b4441372-480d-433f-8231-709a15366f17]
2025-05-05 00:01:54.927538 | orchestrator | 00:01:54.927 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-05 00:01:55.017551 | orchestrator | 00:01:55.017 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=6bb062f4-3150-4015-95dd-e7e8cf50e0c1]
2025-05-05 00:01:55.028089 | orchestrator | 00:01:55.027 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-05 00:01:55.136120 | orchestrator | 00:01:55.135 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=5182cb8f-ad99-4b70-be74-bf186510ee8b]
2025-05-05 00:01:55.143882 | orchestrator | 00:01:55.143 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-05 00:01:55.197668 | orchestrator | 00:01:55.197 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=32638439-49a4-4907-a4ac-c02e2d04da20]
2025-05-05 00:01:55.310710 | orchestrator | 00:01:55.310 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=47286833-254a-467c-9004-9365f0343be3]
2025-05-05 00:01:58.917670 | orchestrator | 00:01:58.917 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=32d7ff85-d588-4a83-86a1-d6228b9bd83d]
2025-05-05 00:01:59.094988 | orchestrator | 00:01:59.094 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=73a0f3a4-daf7-4c39-a9e4-19080753b1cb]
2025-05-05 00:01:59.121574 | orchestrator | 00:01:59.121 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=a3b7930f-3241-4251-87d9-1f17959145bb]
2025-05-05 00:01:59.195250 | orchestrator | 00:01:59.194 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=40c386c7-a1a2-4bc1-a765-1cd0b02e3a79]
2025-05-05 00:02:00.283603 | orchestrator | 00:02:00.283 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=e53178c4-c1b5-4276-bf8e-ae958fbd9218]
2025-05-05 00:02:00.542894 | orchestrator | 00:02:00.542 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=99b7a487-129a-4313-ba2d-3879ca3bb64b]
2025-05-05 00:02:00.562010 | orchestrator | 00:02:00.561 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=e75006bb-396b-4545-8d65-990c2c517039]
2025-05-05 00:02:00.583712 | orchestrator | 00:02:00.583 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=a60875ea-7cf6-44b4-a4c7-a3ed41b3322e]
2025-05-05 00:02:00.604068 | orchestrator | 00:02:00.603 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-05 00:02:00.624014 | orchestrator | 00:02:00.623 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-05 00:02:00.624066 | orchestrator | 00:02:00.623 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-05 00:02:00.624086 | orchestrator | 00:02:00.624 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-05 00:02:00.624094 | orchestrator | 00:02:00.624 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-05 00:02:00.628667 | orchestrator | 00:02:00.628 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-05 00:02:00.634760 | orchestrator | 00:02:00.634 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
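The ordering here (ports complete, then the six `node_server` instances start) reflects Terraform's dependency graph: an instance that references a port cannot be created before that port exists. A minimal hypothetical sketch of such a port-plus-instance pairing (resource names match the log; the network, security group wiring, flavor, and image references are placeholders and assumptions, not taken from the output):

```hcl
# Hypothetical reconstruction — only the resource types and names appear in
# the log; network_id, flavor, image, and naming are assumed placeholders.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id # assumed name
  security_group_ids = [
    openstack_networking_secgroup_v2.security_group_node.id,
  ]
}

resource "openstack_compute_instance_v2" "node_server" {
  count     = 6
  name      = "node-${count.index}"  # assumed naming scheme
  flavor_id = var.node_flavor        # assumed variable
  image_id  = var.node_image         # assumed variable

  network {
    # Referencing the port makes the dependency explicit, so Terraform
    # creates port [i] before instance [i] — matching the log order.
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```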
2025-05-05 00:02:06.954995 | orchestrator | 00:02:06.954 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=88e12153-99c0-4a75-9dd7-f4155abf45fe]
2025-05-05 00:02:06.967864 | orchestrator | 00:02:06.967 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-05 00:02:06.971453 | orchestrator | 00:02:06.971 STDOUT terraform: local_file.inventory: Creating...
2025-05-05 00:02:06.972516 | orchestrator | 00:02:06.972 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-05 00:02:06.976496 | orchestrator | 00:02:06.976 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1e37200253f63174f1968df0de9ee08064530f91]
2025-05-05 00:02:06.978519 | orchestrator | 00:02:06.978 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=fc98aac5bfa68f03dedc9686e6cdb9dd9c379ecd]
2025-05-05 00:02:07.505967 | orchestrator | 00:02:07.505 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=88e12153-99c0-4a75-9dd7-f4155abf45fe]
2025-05-05 00:02:10.621997 | orchestrator | 00:02:10.621 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-05 00:02:10.624996 | orchestrator | 00:02:10.624 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-05 00:02:10.632108 | orchestrator | 00:02:10.631 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-05 00:02:10.633320 | orchestrator | 00:02:10.633 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-05 00:02:10.633504 | orchestrator | 00:02:10.633 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-05 00:02:10.640688 | orchestrator | 00:02:10.640 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-05 00:02:20.624378 | orchestrator | 00:02:20.624 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-05 00:02:20.625315 | orchestrator | 00:02:20.625 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-05 00:02:20.632799 | orchestrator | 00:02:20.632 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-05 00:02:20.634393 | orchestrator | 00:02:20.634 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-05 00:02:20.634504 | orchestrator | 00:02:20.634 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-05 00:02:20.641569 | orchestrator | 00:02:20.641 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-05 00:02:21.064616 | orchestrator | 00:02:21.064 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=656c1bca-be04-4d50-ab72-f13f269194e4]
2025-05-05 00:02:21.095658 | orchestrator | 00:02:21.095 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=47ccc3a1-c421-4de7-97f8-718f5c570714]
2025-05-05 00:02:21.113126 | orchestrator | 00:02:21.112 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=0b79d52c-b7ee-4b31-ac95-aef2d3903243]
2025-05-05 00:02:21.156470 | orchestrator | 00:02:21.156 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=66248a15-237c-4803-b880-c45bba990936]
2025-05-05 00:02:30.624839 | orchestrator | 00:02:30.624 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-05-05 00:02:30.634569 | orchestrator | 00:02:30.634 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-05-05 00:02:31.376106 | orchestrator | 00:02:31.375 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=10da1563-de57-4253-bbd7-1de09bf9ccb2]
2025-05-05 00:02:31.509741 | orchestrator | 00:02:31.509 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=1f9f1920-8851-4b0f-b8b1-012ddff2f57a]
2025-05-05 00:02:31.533737 | orchestrator | 00:02:31.533 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-05-05 00:02:31.535207 | orchestrator | 00:02:31.535 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-05 00:02:31.550421 | orchestrator | 00:02:31.550 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-05-05 00:02:31.551146 | orchestrator | 00:02:31.550 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-05 00:02:31.551191 | orchestrator | 00:02:31.550 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-05-05 00:02:31.551217 | orchestrator | 00:02:31.550 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-05 00:02:31.551228 | orchestrator | 00:02:31.551 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1900505647304336832]
2025-05-05 00:02:31.562913 | orchestrator | 00:02:31.562 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-05-05 00:02:31.565926 | orchestrator | 00:02:31.565 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-05 00:02:31.570774 | orchestrator | 00:02:31.570 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-05 00:02:31.572776 | orchestrator | 00:02:31.572 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-05-05 00:02:31.574134 | orchestrator | 00:02:31.573 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-05 00:02:36.906452 | orchestrator | 00:02:36.905 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=1f9f1920-8851-4b0f-b8b1-012ddff2f57a/6b4716be-1a57-4f60-96f3-25458ff8018c]
2025-05-05 00:02:36.919344 | orchestrator | 00:02:36.918 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=0b79d52c-b7ee-4b31-ac95-aef2d3903243/42a6e7e5-8ee1-4531-a79c-d61afd2d8a10]
2025-05-05 00:02:36.933614 | orchestrator | 00:02:36.918 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-05-05 00:02:36.933742 | orchestrator | 00:02:36.933 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=656c1bca-be04-4d50-ab72-f13f269194e4/6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8]
2025-05-05 00:02:36.945446 | orchestrator | 00:02:36.944 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=0b79d52c-b7ee-4b31-ac95-aef2d3903243/615e20fc-a585-4d17-960f-58a126b0377d]
2025-05-05 00:02:36.947939 | orchestrator | 00:02:36.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=10da1563-de57-4253-bbd7-1de09bf9ccb2/d858a9fc-f161-4032-83a3-99286d7d6b6e]
2025-05-05 00:02:36.948019 | orchestrator | 00:02:36.947 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=656c1bca-be04-4d50-ab72-f13f269194e4/2486f75a-e60a-48fd-8d37-a608e25639e6]
2025-05-05 00:02:36.949493 | orchestrator | 00:02:36.949 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-05-05 00:02:36.957166 | orchestrator | 00:02:36.956 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=1f9f1920-8851-4b0f-b8b1-012ddff2f57a/538b5ef1-8671-4fc9-a3c4-cba69448f95c]
2025-05-05 00:02:36.960846 | orchestrator | 00:02:36.960 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-05-05 00:02:36.962167 | orchestrator | 00:02:36.961 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=47ccc3a1-c421-4de7-97f8-718f5c570714/06746298-857f-44a7-bac9-458d0cb80917]
2025-05-05 00:02:36.965797 | orchestrator | 00:02:36.965 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-05 00:02:36.966426 | orchestrator | 00:02:36.966 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-05 00:02:36.974390 | orchestrator | 00:02:36.974 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=66248a15-237c-4803-b880-c45bba990936/2284275b-81dd-4b13-b1ce-7a79fe4b7203]
2025-05-05 00:02:36.976702 | orchestrator | 00:02:36.976 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-05 00:02:36.980030 | orchestrator | 00:02:36.979 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=10da1563-de57-4253-bbd7-1de09bf9ccb2/4d0bf700-f9e0-49dc-ac25-e14623495170]
2025-05-05 00:02:36.982850 | orchestrator | 00:02:36.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-05-05 00:02:36.987147 | orchestrator | 00:02:36.987 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-05 00:02:37.000084 | orchestrator | 00:02:36.999 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
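Each `node_volume_attachment` id in the log is a `<server-id>/<volume-id>` pair, and those pairs are consistent with volume `i` being attached to server `i % 6` (three data volumes per node). A hypothetical sketch of such an attachment resource (the resource names match the log; the modulo mapping is inferred from the logged id pairs, not read from the actual configuration):

```hcl
# Hypothetical reconstruction — the i % 6 pairing is inferred from the
# "<server-id>/<volume-id>" attachment ids in the log (e.g. attachment[11]
# pairs node_server[5] with node_volume[11]); it is not the verified source.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Because each attachment depends on both its server and its volume, these resources can only start once the corresponding `node_server` instances are complete, which matches the ordering in the log.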
2025-05-05 00:02:42.226665 | orchestrator | 00:02:42.226 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=10da1563-de57-4253-bbd7-1de09bf9ccb2/781aa17e-e7c9-4602-9f68-f5aa193f4164]
2025-05-05 00:02:42.266081 | orchestrator | 00:02:42.265 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=47ccc3a1-c421-4de7-97f8-718f5c570714/4cdb59ba-b27c-4aba-91f1-5fb12951bb58]
2025-05-05 00:02:42.286389 | orchestrator | 00:02:42.285 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=656c1bca-be04-4d50-ab72-f13f269194e4/42838bfa-cc1b-4702-98d9-e28ebdac68d7]
2025-05-05 00:02:42.315155 | orchestrator | 00:02:42.314 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=47ccc3a1-c421-4de7-97f8-718f5c570714/cf82fd11-af58-4978-8cf4-434466d92b22]
2025-05-05 00:02:42.316297 | orchestrator | 00:02:42.315 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=0b79d52c-b7ee-4b31-ac95-aef2d3903243/af745260-1df8-42ba-a894-c5ed39f05370]
2025-05-05 00:02:42.328723 | orchestrator | 00:02:42.328 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=1f9f1920-8851-4b0f-b8b1-012ddff2f57a/3d84f93e-1c6d-4691-b492-2a4ac16c3944]
2025-05-05 00:02:42.342601 | orchestrator | 00:02:42.342 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=66248a15-237c-4803-b880-c45bba990936/408b0152-937f-48ea-b624-2492cd2dac87]
2025-05-05 00:02:42.371159 | orchestrator | 00:02:42.370 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=66248a15-237c-4803-b880-c45bba990936/4fe28f7c-d5bd-43b5-ae36-5544cd531e3f]
2025-05-05 00:02:47.000948 | orchestrator | 00:02:47.000 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-05 00:02:57.001435 | orchestrator | 00:02:57.001 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-05 00:02:58.419869 | orchestrator | 00:02:58.419 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=63b9e3b9-0095-4bda-8d63-7e5fb548c9ea]
2025-05-05 00:02:58.435912 | orchestrator | 00:02:58.435 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-05-05 00:02:58.443577 | orchestrator | 00:02:58.435 STDOUT terraform: Outputs:
2025-05-05 00:02:58.443780 | orchestrator | 00:02:58.435 STDOUT terraform: manager_address =
2025-05-05 00:02:58.443827 | orchestrator | 00:02:58.435 STDOUT terraform: private_key =
2025-05-05 00:03:08.613248 | orchestrator | changed
2025-05-05 00:03:08.651382 |
2025-05-05 00:03:08.651522 | TASK [Fetch manager address]
2025-05-05 00:03:09.102086 | orchestrator | ok
2025-05-05 00:03:09.130995 |
2025-05-05 00:03:09.131288 | TASK [Set manager_host address]
2025-05-05 00:03:09.255530 | orchestrator | ok
2025-05-05 00:03:09.264293 |
2025-05-05 00:03:09.264430 | LOOP [Update ansible collections]
2025-05-05 00:03:10.097120 | orchestrator | changed
2025-05-05 00:03:10.909082 | orchestrator | changed
2025-05-05 00:03:10.935264 |
2025-05-05 00:03:10.935438 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-05 00:03:21.529435 | orchestrator | ok
2025-05-05 00:03:21.542663 |
2025-05-05 00:03:21.542819 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-05 00:04:21.595017 | orchestrator | ok
2025-05-05 00:04:21.605093 |
2025-05-05 00:04:21.605200 | TASK [Fetch manager ssh hostkey]
2025-05-05 00:04:22.706978 | orchestrator | Output suppressed because no_log was given
2025-05-05 00:04:22.718663 |
2025-05-05 00:04:22.718801 | TASK [Get ssh keypair from terraform environment]
2025-05-05 00:04:23.261899 | orchestrator | changed
2025-05-05 00:04:23.282168 |
2025-05-05 00:04:23.282318 | TASK [Point out that the following task takes some time and does not give any output]
2025-05-05 00:04:23.335727 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-05-05 00:04:23.346921 |
2025-05-05 00:04:23.347035 | TASK [Run manager part 0]
2025-05-05 00:04:24.218544 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-05 00:04:24.260718 | orchestrator |
2025-05-05 00:04:26.033524 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-05-05 00:04:26.033612 | orchestrator |
2025-05-05 00:04:26.033642 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-05-05 00:04:26.033662 | orchestrator | ok: [testbed-manager]
2025-05-05 00:04:27.899381 | orchestrator |
2025-05-05 00:04:27.899443 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-05 00:04:27.899457 | orchestrator |
2025-05-05 00:04:27.899464 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-05 00:04:27.899477 | orchestrator | ok: [testbed-manager]
2025-05-05 00:04:28.546958 | orchestrator |
2025-05-05 00:04:28.547022 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-05 00:04:28.547046 | orchestrator | ok: [testbed-manager]
2025-05-05 00:04:28.599551 | orchestrator |
2025-05-05 00:04:28.599585 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-05 00:04:28.599598 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:28.629347 | orchestrator |
2025-05-05 00:04:28.629371 | orchestrator | TASK [Update package cache] ****************************************************
2025-05-05 00:04:28.629382 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:28.658212 | orchestrator |
2025-05-05 00:04:28.658266 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-05 00:04:28.658295 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:28.685280 | orchestrator |
2025-05-05 00:04:28.685334 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-05 00:04:28.685352 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:28.711116 | orchestrator |
2025-05-05 00:04:28.711135 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-05 00:04:28.711145 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:28.739575 | orchestrator |
2025-05-05 00:04:28.739614 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-05-05 00:04:28.739626 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:28.767955 | orchestrator |
2025-05-05 00:04:28.767978 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-05-05 00:04:28.767989 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:04:29.587486 | orchestrator |
2025-05-05 00:04:29.587548 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-05-05 00:04:29.587564 | orchestrator | changed: [testbed-manager]
2025-05-05 00:07:42.357677 | orchestrator |
2025-05-05 00:07:42.357819 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-05-05 00:07:42.357900 | orchestrator | changed: [testbed-manager]
2025-05-05 00:09:09.272000 | orchestrator |
2025-05-05 00:09:09.272115 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-05 00:09:09.272151 | orchestrator | changed: [testbed-manager]
2025-05-05 00:09:36.037829 | orchestrator |
2025-05-05 00:09:36.037943 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-05 00:09:36.037980 | orchestrator | changed: [testbed-manager]
2025-05-05 00:09:44.546908 | orchestrator |
2025-05-05 00:09:44.547061 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-05 00:09:44.547103 | orchestrator | changed: [testbed-manager]
2025-05-05 00:09:44.599829 | orchestrator |
2025-05-05 00:09:44.599923 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-05 00:09:44.599956 | orchestrator | ok: [testbed-manager]
2025-05-05 00:09:45.402779 | orchestrator |
2025-05-05 00:09:45.402879 | orchestrator | TASK [Get current user] ********************************************************
2025-05-05 00:09:45.402902 | orchestrator | ok: [testbed-manager]
2025-05-05 00:09:46.128320 | orchestrator |
2025-05-05 00:09:46.128427 | orchestrator | TASK [Create venv directory] ***************************************************
2025-05-05 00:09:46.128478 | orchestrator | changed: [testbed-manager]
2025-05-05 00:09:52.482829 | orchestrator |
2025-05-05 00:09:52.482886 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-05-05 00:09:52.482908 | orchestrator | changed: [testbed-manager]
2025-05-05 00:09:58.267291 | orchestrator |
2025-05-05 00:09:58.267365 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-05-05 00:09:58.267394 | orchestrator | changed: [testbed-manager]
2025-05-05 00:10:00.781590 | orchestrator |
2025-05-05 00:10:00.781724 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-05-05 00:10:00.781764 | orchestrator | changed: [testbed-manager]
2025-05-05 00:10:02.585540 | orchestrator |
2025-05-05 00:10:02.585663 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-05-05 00:10:02.585702 | orchestrator | changed: [testbed-manager]
2025-05-05 00:10:03.720979 | orchestrator |
2025-05-05 00:10:03.721090 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-05-05 00:10:03.721126 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-05-05 00:10:03.763269 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-05-05 00:10:03.763328 | orchestrator |
2025-05-05 00:10:03.763337 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-05-05 00:10:03.763351 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-05 00:10:06.983708 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-05 00:10:06.983871 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-05 00:10:06.983895 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-05 00:10:06.983930 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-05 00:10:07.556042 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-05 00:10:07.556150 | orchestrator | 2025-05-05 00:10:07.556171 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-05 00:10:07.556201 | orchestrator | changed: [testbed-manager] 2025-05-05 00:10:29.417083 | orchestrator | 2025-05-05 00:10:29.417145 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-05 00:10:29.417166 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-05 00:10:31.696046 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-05 00:10:31.696091 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-05 00:10:31.696097 | orchestrator | 2025-05-05 00:10:31.696105 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-05 00:10:31.696117 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-05 00:10:33.126825 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-05 00:10:33.126877 | orchestrator | 2025-05-05 00:10:33.126885 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-05 00:10:33.126892 | orchestrator | 2025-05-05 00:10:33.126898 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-05 00:10:33.126912 | orchestrator | ok: [testbed-manager] 2025-05-05 00:10:33.176748 | orchestrator | 2025-05-05 00:10:33.176831 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-05 00:10:33.176851 | orchestrator | ok: [testbed-manager] 2025-05-05 00:10:33.243225 | 
orchestrator | 2025-05-05 00:10:33.243280 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-05 00:10:33.243301 | orchestrator | ok: [testbed-manager] 2025-05-05 00:10:33.970906 | orchestrator | 2025-05-05 00:10:33.970958 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-05 00:10:33.970980 | orchestrator | changed: [testbed-manager] 2025-05-05 00:10:34.707278 | orchestrator | 2025-05-05 00:10:34.707331 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-05 00:10:34.707350 | orchestrator | changed: [testbed-manager] 2025-05-05 00:10:36.090908 | orchestrator | 2025-05-05 00:10:36.091035 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-05 00:10:36.091073 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-05 00:10:37.502633 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-05 00:10:37.502744 | orchestrator | 2025-05-05 00:10:37.502764 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-05 00:10:37.502818 | orchestrator | changed: [testbed-manager] 2025-05-05 00:10:39.258359 | orchestrator | 2025-05-05 00:10:39.258481 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-05 00:10:39.258538 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-05 00:10:39.837701 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-05 00:10:39.838476 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-05 00:10:39.838507 | orchestrator | 2025-05-05 00:10:39.838526 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-05 00:10:39.838560 | orchestrator | changed: [testbed-manager] 
2025-05-05 00:10:39.911431 | orchestrator | 2025-05-05 00:10:39.911535 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-05 00:10:39.911571 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:10:40.782159 | orchestrator | 2025-05-05 00:10:40.782269 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-05 00:10:40.782307 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-05 00:10:40.823540 | orchestrator | changed: [testbed-manager] 2025-05-05 00:10:40.823620 | orchestrator | 2025-05-05 00:10:40.823638 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-05 00:10:40.823666 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:10:40.863969 | orchestrator | 2025-05-05 00:10:40.864047 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-05 00:10:40.864077 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:10:40.904913 | orchestrator | 2025-05-05 00:10:40.905010 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-05 00:10:40.905045 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:10:40.958110 | orchestrator | 2025-05-05 00:10:40.958222 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-05 00:10:40.958261 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:10:41.679628 | orchestrator | 2025-05-05 00:10:41.679684 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-05 00:10:41.679701 | orchestrator | ok: [testbed-manager] 2025-05-05 00:10:43.147091 | orchestrator | 2025-05-05 00:10:43.147193 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-05 00:10:43.147214 | orchestrator | 2025-05-05 
00:10:43.147230 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-05 00:10:43.147260 | orchestrator | ok: [testbed-manager] 2025-05-05 00:10:44.112619 | orchestrator | 2025-05-05 00:10:44.112682 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-05 00:10:44.112698 | orchestrator | changed: [testbed-manager] 2025-05-05 00:10:44.233952 | orchestrator | 2025-05-05 00:10:44.234193 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:10:44.234206 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-05 00:10:44.234212 | orchestrator | 2025-05-05 00:10:44.637731 | orchestrator | changed 2025-05-05 00:10:44.658878 | 2025-05-05 00:10:44.659019 | TASK [Point out that the log in on the manager is now possible] 2025-05-05 00:10:44.708898 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-05-05 00:10:44.719941 | 2025-05-05 00:10:44.720065 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-05 00:10:44.753263 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 
2025-05-05 00:10:44.761945 |
2025-05-05 00:10:44.762051 | TASK [Run manager part 1 + 2]
2025-05-05 00:10:45.656826 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-05 00:10:45.720924 | orchestrator |
2025-05-05 00:10:48.273261 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-05-05 00:10:48.273343 | orchestrator |
2025-05-05 00:10:48.273358 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-05 00:10:48.273381 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:48.313328 | orchestrator |
2025-05-05 00:10:48.313405 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-05 00:10:48.313431 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:10:48.360431 | orchestrator |
2025-05-05 00:10:48.360503 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-05 00:10:48.360525 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:48.403991 | orchestrator |
2025-05-05 00:10:48.404067 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-05 00:10:48.404090 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:48.472042 | orchestrator |
2025-05-05 00:10:48.472116 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-05 00:10:48.472138 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:48.534124 | orchestrator |
2025-05-05 00:10:48.534198 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-05 00:10:48.534220 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:48.585015 | orchestrator |
2025-05-05 00:10:48.585087 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-05 00:10:48.585106 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-05-05 00:10:49.305203 | orchestrator |
2025-05-05 00:10:49.305291 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-05 00:10:49.305313 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:49.358460 | orchestrator |
2025-05-05 00:10:49.358535 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-05 00:10:49.358555 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:10:50.733626 | orchestrator |
2025-05-05 00:10:50.733713 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-05 00:10:50.733742 | orchestrator | changed: [testbed-manager]
2025-05-05 00:10:51.304427 | orchestrator |
2025-05-05 00:10:51.304514 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-05 00:10:51.304538 | orchestrator | ok: [testbed-manager]
2025-05-05 00:10:52.449084 | orchestrator |
2025-05-05 00:10:52.449179 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-05 00:10:52.449213 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:05.327306 | orchestrator |
2025-05-05 00:11:05.327445 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-05 00:11:05.327484 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:06.005650 | orchestrator |
2025-05-05 00:11:06.005797 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-05 00:11:06.005838 | orchestrator | ok: [testbed-manager]
2025-05-05 00:11:06.058737 | orchestrator |
2025-05-05 00:11:06.058881 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-05 00:11:06.058917 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:11:07.010461 | orchestrator |
2025-05-05 00:11:07.010603 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-05-05 00:11:07.010662 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:07.983039 | orchestrator |
2025-05-05 00:11:07.983168 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-05-05 00:11:07.983206 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:08.552789 | orchestrator |
2025-05-05 00:11:08.552904 | orchestrator | TASK [Create configuration directory] ******************************************
2025-05-05 00:11:08.552943 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:08.592026 | orchestrator |
2025-05-05 00:11:08.592100 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-05-05 00:11:08.592129 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-05 00:11:10.886701 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-05 00:11:10.886843 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-05 00:11:10.886866 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-05 00:11:10.886910 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:19.849290 | orchestrator |
2025-05-05 00:11:19.849454 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-05-05 00:11:19.849501 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-05-05 00:11:20.913317 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-05-05 00:11:20.913433 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-05-05 00:11:20.913451 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-05-05 00:11:20.913467 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-05-05 00:11:20.913481 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-05-05 00:11:20.913496 | orchestrator |
2025-05-05 00:11:20.913511 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-05-05 00:11:20.913559 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:20.959967 | orchestrator |
2025-05-05 00:11:20.960068 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-05-05 00:11:20.960101 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:11:24.059256 | orchestrator |
2025-05-05 00:11:24.059313 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-05-05 00:11:24.059330 | orchestrator | changed: [testbed-manager]
2025-05-05 00:11:24.104164 | orchestrator |
2025-05-05 00:11:24.104220 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-05-05 00:11:24.104239 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:12:55.344788 | orchestrator |
2025-05-05 00:12:55.344942 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-05-05 00:12:55.344983 | orchestrator | changed: [testbed-manager]
2025-05-05 00:12:56.443718 | orchestrator |
2025-05-05 00:12:56.443833 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-05 00:12:56.443882 | orchestrator | ok: [testbed-manager]
2025-05-05 00:12:56.536543 | orchestrator |
2025-05-05 00:12:56.536654 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:12:56.536731 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-05 00:12:56.536763 | orchestrator |
2025-05-05 00:12:56.957318 | orchestrator | changed
2025-05-05 00:12:56.968792 |
2025-05-05 00:12:56.968925 | TASK [Reboot manager]
2025-05-05 00:12:58.515137 | orchestrator | changed
2025-05-05 00:12:58.534797 |
2025-05-05 00:12:58.534956 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-05 00:13:12.360609 | orchestrator | ok
2025-05-05 00:13:12.372249 |
2025-05-05 00:13:12.372363 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-05 00:14:12.420219 | orchestrator | ok
2025-05-05 00:14:12.431390 |
2025-05-05 00:14:12.431509 | TASK [Deploy manager + bootstrap nodes]
2025-05-05 00:14:14.787800 | orchestrator |
2025-05-05 00:14:14.790975 | orchestrator | # DEPLOY MANAGER
2025-05-05 00:14:14.791025 | orchestrator |
2025-05-05 00:14:14.791044 | orchestrator | + set -e
2025-05-05 00:14:14.791138 | orchestrator | + echo
2025-05-05 00:14:14.791161 | orchestrator | + echo '# DEPLOY MANAGER'
2025-05-05 00:14:14.791179 | orchestrator | + echo
2025-05-05 00:14:14.791204 | orchestrator | + cat /opt/manager-vars.sh
2025-05-05 00:14:14.791243 | orchestrator | export NUMBER_OF_NODES=6
2025-05-05 00:14:14.792083 | orchestrator |
2025-05-05 00:14:14.792156 | orchestrator | export CEPH_VERSION=reef
2025-05-05 00:14:14.792177 | orchestrator | export CONFIGURATION_VERSION=main
2025-05-05 00:14:14.792232 | orchestrator | export MANAGER_VERSION=8.1.0
2025-05-05 00:14:14.792251 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-05-05 00:14:14.792306 | orchestrator |
2025-05-05 00:14:14.792337 | orchestrator | export ARA=false
2025-05-05 00:14:14.792354 | orchestrator | export TEMPEST=false
2025-05-05 00:14:14.792369 | orchestrator | export IS_ZUUL=true
2025-05-05 00:14:14.792384 | orchestrator |
2025-05-05 00:14:14.792398 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165
2025-05-05 00:14:14.792414 | orchestrator | export EXTERNAL_API=false
2025-05-05 00:14:14.792428 | orchestrator |
2025-05-05 00:14:14.792442 | orchestrator | export IMAGE_USER=ubuntu
2025-05-05 00:14:14.792456 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-05-05 00:14:14.792472 | orchestrator |
2025-05-05 00:14:14.792487 | orchestrator | export CEPH_STACK=ceph-ansible
2025-05-05 00:14:14.792527 | orchestrator |
2025-05-05 00:14:14.792544 | orchestrator | + echo
2025-05-05 00:14:14.792558 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-05 00:14:14.792583 | orchestrator | ++ export INTERACTIVE=false
2025-05-05 00:14:14.841034 | orchestrator | ++ INTERACTIVE=false
2025-05-05 00:14:14.841155 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-05 00:14:14.841187 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-05 00:14:14.841203 | orchestrator | + source /opt/manager-vars.sh
2025-05-05 00:14:14.841218 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-05 00:14:14.841236 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-05 00:14:14.841251 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-05 00:14:14.841265 | orchestrator | ++ CEPH_VERSION=reef
2025-05-05 00:14:14.841279 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-05 00:14:14.841295 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-05 00:14:14.841319 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-05 00:14:14.841334 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-05 00:14:14.841348 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-05 00:14:14.841362 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-05 00:14:14.841377 | orchestrator | ++ export ARA=false
2025-05-05 00:14:14.841391 | orchestrator | ++ ARA=false
2025-05-05 00:14:14.841406 | orchestrator | ++ export TEMPEST=false
2025-05-05 00:14:14.841420 | orchestrator | ++ TEMPEST=false
2025-05-05 00:14:14.841434 | orchestrator | ++ export IS_ZUUL=true
2025-05-05 00:14:14.841448 | orchestrator | ++ IS_ZUUL=true
2025-05-05 00:14:14.841463 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165
2025-05-05 00:14:14.841479 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165
2025-05-05 00:14:14.841501 | orchestrator | ++ export EXTERNAL_API=false
2025-05-05 00:14:14.841515 | orchestrator | ++ EXTERNAL_API=false
2025-05-05 00:14:14.841529 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-05 00:14:14.841544 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-05 00:14:14.841558 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-05 00:14:14.841571 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-05 00:14:14.841589 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-05 00:14:14.841604 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-05 00:14:14.841618 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-05-05 00:14:14.841688 | orchestrator | + docker version
2025-05-05 00:14:15.081499 | orchestrator | Client: Docker Engine - Community
2025-05-05 00:14:15.084393 | orchestrator | Version: 26.1.4
2025-05-05 00:14:15.084448 | orchestrator | API version: 1.45
2025-05-05 00:14:15.084473 | orchestrator | Go version: go1.21.11
2025-05-05 00:14:15.084501 | orchestrator | Git commit: 5650f9b
2025-05-05 00:14:15.084528 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-05 00:14:15.084548 | orchestrator | OS/Arch: linux/amd64
2025-05-05 00:14:15.084562 | orchestrator | Context: default
2025-05-05 00:14:15.084577 | orchestrator |
2025-05-05 00:14:15.084592 | orchestrator | Server: Docker Engine - Community
2025-05-05 00:14:15.084606 | orchestrator | Engine:
2025-05-05 00:14:15.084620 | orchestrator | Version: 26.1.4
2025-05-05 00:14:15.084635 | orchestrator | API version: 1.45 (minimum version 1.24)
2025-05-05 00:14:15.084649 | orchestrator | Go version: go1.21.11
2025-05-05 00:14:15.084696 | orchestrator | Git commit: de5c9cf
2025-05-05 00:14:15.084748 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-05 00:14:15.084764 | orchestrator | OS/Arch: linux/amd64
2025-05-05 00:14:15.084778 | orchestrator | Experimental: false
2025-05-05 00:14:15.084793 | orchestrator | containerd:
2025-05-05 00:14:15.084807 | orchestrator | Version: 1.7.27
2025-05-05 00:14:15.084821 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-05-05 00:14:15.084836 | orchestrator | runc:
2025-05-05 00:14:15.084851 | orchestrator | Version: 1.2.5
2025-05-05 00:14:15.084866 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-05-05 00:14:15.084880 | orchestrator | docker-init:
2025-05-05 00:14:15.084894 | orchestrator | Version: 0.19.0
2025-05-05 00:14:15.084909 | orchestrator | GitCommit: de40ad0
2025-05-05 00:14:15.084933 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-05-05 00:14:15.093698 | orchestrator | + set -e
2025-05-05 00:14:15.093794 | orchestrator | + source /opt/manager-vars.sh
2025-05-05 00:14:15.093833 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-05 00:14:15.093848 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-05 00:14:15.093863 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-05 00:14:15.093878 | orchestrator | ++ CEPH_VERSION=reef
2025-05-05 00:14:15.093892 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-05 00:14:15.093907 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-05 00:14:15.093921 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-05 00:14:15.093935 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-05 00:14:15.093949 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-05 00:14:15.093964 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-05 00:14:15.093978 | orchestrator | ++ export ARA=false
2025-05-05 00:14:15.093992 | orchestrator | ++ ARA=false
2025-05-05 00:14:15.094007 | orchestrator | ++ export TEMPEST=false
2025-05-05 00:14:15.094075 | orchestrator | ++ TEMPEST=false
2025-05-05 00:14:15.094090 | orchestrator | ++ export IS_ZUUL=true
2025-05-05 00:14:15.094104 | orchestrator | ++ IS_ZUUL=true
2025-05-05 00:14:15.094120 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165
2025-05-05 00:14:15.094134 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165
2025-05-05 00:14:15.094149 | orchestrator | ++ export EXTERNAL_API=false
2025-05-05 00:14:15.094163 | orchestrator | ++ EXTERNAL_API=false
2025-05-05 00:14:15.094177 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-05 00:14:15.094203 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-05 00:14:15.100208 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-05 00:14:15.100263 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-05 00:14:15.100278 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-05 00:14:15.100304 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-05 00:14:15.100319 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-05 00:14:15.100334 | orchestrator | ++ export INTERACTIVE=false
2025-05-05 00:14:15.100348 | orchestrator | ++ INTERACTIVE=false
2025-05-05 00:14:15.100363 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-05 00:14:15.100377 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-05 00:14:15.100392 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-05 00:14:15.100410 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0
2025-05-05 00:14:15.100435 | orchestrator | + set -e
2025-05-05 00:14:15.110442 | orchestrator | + VERSION=8.1.0
2025-05-05 00:14:15.110491 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-05-05 00:14:15.110531 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-05 00:14:15.114436 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-05 00:14:15.114478 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-05 00:14:15.117629 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-05-05 00:14:15.127362 | orchestrator | /opt/configuration ~
2025-05-05 00:14:15.128615 | orchestrator | + set -e
2025-05-05 00:14:15.128703 | orchestrator | + pushd /opt/configuration
2025-05-05 00:14:15.128722 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-05 00:14:15.128752 | orchestrator | + source /opt/venv/bin/activate
2025-05-05 00:14:15.129948 | orchestrator | ++ deactivate nondestructive
2025-05-05 00:14:15.129977 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:15.129992 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:15.130007 | orchestrator | ++ hash -r
2025-05-05 00:14:15.130071 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:15.130087 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-05 00:14:15.130102 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-05 00:14:15.130118 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-05 00:14:15.130163 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-05 00:14:15.130179 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-05 00:14:15.130193 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-05 00:14:15.130210 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-05 00:14:15.130225 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-05 00:14:15.130241 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-05 00:14:15.130255 | orchestrator | ++ export PATH
2025-05-05 00:14:15.130270 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:15.130284 | orchestrator | ++ '[' -z '' ']'
2025-05-05 00:14:15.130298 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-05 00:14:15.130312 | orchestrator | ++ PS1='(venv) '
2025-05-05 00:14:15.130331 | orchestrator | ++ export PS1
2025-05-05 00:14:16.031544 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-05 00:14:16.031736 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-05 00:14:16.031758 | orchestrator | ++ hash -r
2025-05-05 00:14:16.031776 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-05-05 00:14:16.031817 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-05-05 00:14:16.033479 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-05-05 00:14:16.033791 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-05-05 00:14:16.034925 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-05-05 00:14:16.036113 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-05-05 00:14:16.045718 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8)
2025-05-05 00:14:16.047147 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-05-05 00:14:16.048227 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-05-05 00:14:16.049568 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-05-05 00:14:16.078444 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-05-05 00:14:16.079814 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-05-05 00:14:16.081335 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-05-05 00:14:16.082808 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-05-05 00:14:16.086828 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-05-05 00:14:16.288125 | orchestrator | ++ which gilt
2025-05-05 00:14:16.291476 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-05-05 00:14:16.502998 | orchestrator | + /opt/venv/bin/gilt overlay
2025-05-05 00:14:16.503161 | orchestrator | osism.cfg-generics:
2025-05-05 00:14:17.985056 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-05-05 00:14:17.985247 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-05-05 00:14:18.846890 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-05-05 00:14:18.847062 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-05-05 00:14:18.847084 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-05-05 00:14:18.847124 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-05-05 00:14:18.852920 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-05-05 00:14:19.341392 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-05-05 00:14:19.389798 | orchestrator | ~
2025-05-05 00:14:19.391958 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-05 00:14:19.392009 | orchestrator | + deactivate
2025-05-05 00:14:19.392050 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-05 00:14:19.392069 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-05 00:14:19.392083 | orchestrator | + export PATH
2025-05-05 00:14:19.392098 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-05 00:14:19.392113 | orchestrator | + '[' -n '' ']'
2025-05-05 00:14:19.392127 | orchestrator | + hash -r
2025-05-05 00:14:19.392142 | orchestrator | + '[' -n '' ']'
2025-05-05 00:14:19.392156 | orchestrator | + unset VIRTUAL_ENV
2025-05-05 00:14:19.392171 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-05 00:14:19.392185 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-05 00:14:19.392204 | orchestrator | + unset -f deactivate
2025-05-05 00:14:19.392218 | orchestrator | + popd
2025-05-05 00:14:19.392242 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-05 00:14:19.393096 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-05 00:14:19.393128 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-05 00:14:19.456529 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-05 00:14:19.494585 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-05 00:14:19.494783 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-05 00:14:19.494816 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-05 00:14:19.494969 | orchestrator | + source /opt/venv/bin/activate
2025-05-05 00:14:19.494996 | orchestrator | ++ deactivate nondestructive
2025-05-05 00:14:19.495030 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:19.495049 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:19.495101 | orchestrator | ++ hash -r
2025-05-05 00:14:19.495118 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:19.495133 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-05 00:14:19.495184 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-05 00:14:19.495203 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-05 00:14:19.495570 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-05 00:14:19.495592 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-05 00:14:19.495607 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-05 00:14:19.495623 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-05 00:14:19.495643 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-05 00:14:19.495812 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-05 00:14:19.495830 | orchestrator | ++ export PATH
2025-05-05 00:14:19.495846 | orchestrator | ++ '[' -n '' ']'
2025-05-05 00:14:19.495865 | orchestrator | ++ '[' -z '' ']'
2025-05-05 00:14:19.496005 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-05 00:14:19.496023 | orchestrator | ++ PS1='(venv) '
2025-05-05 00:14:19.496037 | orchestrator | ++ export PS1
2025-05-05 00:14:19.496052 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-05 00:14:19.496071 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-05 00:14:19.496284 | orchestrator | ++ hash -r
2025-05-05 00:14:19.496307 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-05 00:14:20.503163 | orchestrator |
2025-05-05 00:14:21.062301 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-05 00:14:21.062464 | orchestrator |
2025-05-05 00:14:21.062486 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-05 00:14:21.062525 | orchestrator | ok: [testbed-manager]
2025-05-05 00:14:22.018624 | orchestrator |
2025-05-05 00:14:22.018829 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-05 00:14:22.018872 | orchestrator | changed: [testbed-manager]
2025-05-05 00:14:24.220186 | orchestrator |
2025-05-05 00:14:24.220356 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-05 00:14:24.220380 | orchestrator |
2025-05-05 00:14:24.220397 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-05 00:14:24.220431 | orchestrator | ok: [testbed-manager]
2025-05-05 00:14:29.173439 | orchestrator |
2025-05-05 00:14:29.173602 | orchestrator | TASK [Pull images] *************************************************************
2025-05-05 00:14:29.173762 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-05 00:15:44.918379 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2)
2025-05-05 00:15:44.918568 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-05-05 00:15:44.918593 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-05-05 00:15:44.918610 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-05-05 00:15:44.918626 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine)
2025-05-05 00:15:44.918728 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-05-05 00:15:44.918752 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-05-05 00:15:44.918767 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-05-05 00:15:44.918791 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine)
2025-05-05 00:15:44.918807 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1)
2025-05-05 00:15:44.918822 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2)
2025-05-05 00:15:44.918837 | orchestrator |
2025-05-05 00:15:44.918851 | orchestrator | TASK [Check status] ************************************************************
2025-05-05 00:15:44.918884 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-05 00:15:44.968172 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-05 00:15:44.968297 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-05 00:15:44.968316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-05-05 00:15:44.968333 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j245043597177.1586', 'results_file': '/home/dragon/.ansible_async/j245043597177.1586', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968369 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j939227455033.1611', 'results_file': '/home/dragon/.ansible_async/j939227455033.1611', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968385 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-05 00:15:44.968400 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j357406024798.1636', 'results_file': '/home/dragon/.ansible_async/j357406024798.1636', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968422 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j356721960908.1668', 'results_file': '/home/dragon/.ansible_async/j356721960908.1668', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968441 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-05 00:15:44.968456 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j547753793496.1700', 'results_file': '/home/dragon/.ansible_async/j547753793496.1700', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968471 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j512520159464.1732', 'results_file': '/home/dragon/.ansible_async/j512520159464.1732', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968485 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-05 00:15:44.968500 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j31733454390.1764', 'results_file': '/home/dragon/.ansible_async/j31733454390.1764', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968547 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j813738095422.1797', 'results_file': '/home/dragon/.ansible_async/j813738095422.1797', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968562 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j883352889107.1835', 'results_file': '/home/dragon/.ansible_async/j883352889107.1835', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968577 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j714134780204.1860', 'results_file': '/home/dragon/.ansible_async/j714134780204.1860', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968591 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j78442510874.1892', 'results_file': '/home/dragon/.ansible_async/j78442510874.1892', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968606 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j163814642420.1932', 'results_file': '/home/dragon/.ansible_async/j163814642420.1932', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-05-05 00:15:44.968620 | orchestrator |
2025-05-05 00:15:44.968681 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-05 00:15:44.968718 | orchestrator | ok: [testbed-manager]
2025-05-05 00:15:45.424149 | orchestrator |
2025-05-05 00:15:45.424259 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-05-05 00:15:45.424282 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:45.775591 | orchestrator |
2025-05-05 00:15:45.775778 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-05 00:15:45.775818 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:46.115449 | orchestrator |
2025-05-05 00:15:46.115577 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-05 00:15:46.115616 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:46.162407 | orchestrator |
2025-05-05 00:15:46.162458 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-05 00:15:46.162484 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:15:46.511734 | orchestrator |
2025-05-05 00:15:46.511859 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-05 00:15:46.511894 | orchestrator | ok: [testbed-manager]
2025-05-05 00:15:46.640108 | orchestrator |
2025-05-05 00:15:46.640240 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-05 00:15:46.640279 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:15:48.442193 | orchestrator |
2025-05-05 00:15:48.442339 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-05 00:15:48.442361 | orchestrator |
2025-05-05 00:15:48.442377 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-05 00:15:48.442415 | orchestrator | ok: [testbed-manager]
2025-05-05 00:15:48.537237 | orchestrator |
2025-05-05 00:15:48.537359 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-05 00:15:48.537398 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-05 00:15:48.592965 | orchestrator |
2025-05-05 00:15:48.593052 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-05 00:15:48.593084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-05 00:15:49.671744 | orchestrator |
2025-05-05 00:15:49.671861 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-05 00:15:49.671884 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-05 00:15:51.439386 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-05 00:15:51.439522 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-05 00:15:51.439542 | orchestrator |
2025-05-05 00:15:51.439558 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-05 00:15:51.439590 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-05 00:15:52.087572 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-05 00:15:52.087770 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-05 00:15:52.087794 | orchestrator |
2025-05-05 00:15:52.087811 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-05 00:15:52.087843 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-05 00:15:52.732004 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:52.732129 | orchestrator |
2025-05-05 00:15:52.732150 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-05 00:15:52.732184 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-05 00:15:52.784441 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:52.784586 | orchestrator |
2025-05-05 00:15:52.784616 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-05 00:15:52.784692 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:15:53.120007 | orchestrator |
2025-05-05 00:15:53.120169 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-05 00:15:53.120227 | orchestrator | ok: [testbed-manager]
2025-05-05 00:15:53.176758 | orchestrator |
2025-05-05 00:15:53.176876 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-05 00:15:53.176914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-05 00:15:54.162918 | orchestrator |
2025-05-05 00:15:54.163050 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-05 00:15:54.163091 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:54.967204 | orchestrator |
2025-05-05 00:15:54.967353 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-05 00:15:54.967391 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:58.073073 | orchestrator |
2025-05-05 00:15:58.073202 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-05 00:15:58.073240 | orchestrator | changed: [testbed-manager]
2025-05-05 00:15:58.178705 | orchestrator |
2025-05-05 00:15:58.178821 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-05 00:15:58.178858 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-05 00:15:58.238699 | orchestrator |
2025-05-05 00:15:58.238795 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-05 00:15:58.238828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-05 00:16:00.633752 | orchestrator |
2025-05-05 00:16:00.633874 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-05 00:16:00.633909 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:00.732155 | orchestrator |
2025-05-05 00:16:00.732255 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-05 00:16:00.732289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-05 00:16:01.788195 | orchestrator |
2025-05-05 00:16:01.788332 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-05 00:16:01.788373 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-05 00:16:01.858816 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-05 00:16:01.858937 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-05 00:16:01.858954 | orchestrator |
2025-05-05 00:16:01.858970 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-05 00:16:01.859029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-05 00:16:02.485389 | orchestrator |
2025-05-05 00:16:02.485518 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-05 00:16:02.485554 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-05 00:16:03.102258 | orchestrator |
2025-05-05 00:16:03.102400 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-05 00:16:03.102468 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:03.736923 | orchestrator |
2025-05-05 00:16:03.737074 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-05 00:16:03.737134 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-05 00:16:04.121494 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:04.121598 | orchestrator |
2025-05-05 00:16:04.121611 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-05 00:16:04.121675 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:04.475146 | orchestrator |
2025-05-05 00:16:04.475266 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-05 00:16:04.475302 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:04.527131 | orchestrator |
2025-05-05 00:16:04.527279 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-05 00:16:04.527316 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:05.128164 | orchestrator |
2025-05-05 00:16:05.128324 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-05 00:16:05.128366 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:05.206117 | orchestrator |
2025-05-05 00:16:05.206213 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-05 00:16:05.206245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-05 00:16:05.933389 | orchestrator |
2025-05-05 00:16:05.933518 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-05 00:16:05.933557 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-05 00:16:06.557687 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-05 00:16:06.557821 | orchestrator |
2025-05-05 00:16:06.557845 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-05 00:16:06.557877 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-05 00:16:07.189117 | orchestrator |
2025-05-05 00:16:07.189246 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-05 00:16:07.189282 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:07.227586 | orchestrator |
2025-05-05 00:16:07.227681 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-05 00:16:07.227712 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:07.873894 | orchestrator |
2025-05-05 00:16:07.874114 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-05 00:16:07.874167 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:09.620839 | orchestrator |
2025-05-05 00:16:09.620964 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-05 00:16:09.621000 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-05 00:16:15.377904 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-05 00:16:15.378752 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-05 00:16:15.378786 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:15.378805 | orchestrator |
2025-05-05 00:16:15.378821 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-05 00:16:15.378855 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-05 00:16:16.005701 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-05 00:16:16.005843 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-05 00:16:16.005864 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-05 00:16:16.005880 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-05 00:16:16.005896 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-05 00:16:16.005936 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-05 00:16:16.005951 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-05 00:16:16.005967 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-05 00:16:16.005982 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-05 00:16:16.005996 | orchestrator |
2025-05-05 00:16:16.006012 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-05 00:16:16.006111 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-05 00:16:16.092164 | orchestrator |
2025-05-05 00:16:16.092271 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-05 00:16:16.092306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-05 00:16:16.754618 | orchestrator |
2025-05-05 00:16:16.754735 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-05 00:16:16.754755 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:17.345914 | orchestrator |
2025-05-05 00:16:17.346106 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-05 00:16:17.346148 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:18.058241 | orchestrator |
2025-05-05 00:16:18.058442 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-05 00:16:18.058485 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:23.635045 | orchestrator |
2025-05-05 00:16:23.635182 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-05 00:16:23.635222 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:24.525965 | orchestrator |
2025-05-05 00:16:24.526164 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-05 00:16:24.526203 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:46.615561 | orchestrator |
2025-05-05 00:16:46.615753 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-05 00:16:46.615796 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-05 00:16:46.670122 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:46.670243 | orchestrator |
2025-05-05 00:16:46.670264 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-05 00:16:46.670299 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:46.712592 | orchestrator |
2025-05-05 00:16:46.712706 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-05 00:16:46.712715 | orchestrator |
2025-05-05 00:16:46.712722 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-05 00:16:46.712739 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:46.769309 | orchestrator |
2025-05-05 00:16:46.769415 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-05 00:16:46.769443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-05 00:16:47.534398 | orchestrator |
2025-05-05 00:16:47.534526 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-05 00:16:47.534563 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:47.593739 | orchestrator |
2025-05-05 00:16:47.593825 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-05 00:16:47.593857 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:47.635548 | orchestrator |
2025-05-05 00:16:47.635605 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-05 00:16:47.635667 | orchestrator | ok: [testbed-manager] => {
2025-05-05 00:16:48.267259 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-05-05 00:16:48.267393 | orchestrator | }
2025-05-05 00:16:48.267414 | orchestrator |
2025-05-05 00:16:48.267430 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-05 00:16:48.267462 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:49.134095 | orchestrator |
2025-05-05 00:16:49.134222 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-05 00:16:49.134291 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:49.206508 | orchestrator |
2025-05-05 00:16:49.206662 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-05 00:16:49.206716 | orchestrator | ok: [testbed-manager]
2025-05-05 00:16:49.254563 | orchestrator |
2025-05-05 00:16:49.254690 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-05 00:16:49.254739 | orchestrator | ok: [testbed-manager] => {
2025-05-05 00:16:49.310249 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-05-05 00:16:49.310332 | orchestrator | }
2025-05-05 00:16:49.310349 | orchestrator |
2025-05-05 00:16:49.310363 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-05 00:16:49.310393 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:49.359743 | orchestrator |
2025-05-05 00:16:49.359827 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-05 00:16:49.359858 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:49.416198 | orchestrator |
2025-05-05 00:16:49.416300 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-05 00:16:49.416333 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:49.466184 | orchestrator |
2025-05-05 00:16:49.466261 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-05 00:16:49.466290 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:49.519075 | orchestrator |
2025-05-05 00:16:49.519199 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-05 00:16:49.519248 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:49.577912 | orchestrator |
2025-05-05 00:16:49.578078 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-05 00:16:49.578125 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:16:50.808841 | orchestrator |
2025-05-05 00:16:50.808979 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-05 00:16:50.809019 | orchestrator | changed: [testbed-manager]
2025-05-05 00:16:50.885105 | orchestrator |
2025-05-05 00:16:50.885218 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-05 00:16:50.885253 | orchestrator | ok: [testbed-manager]
2025-05-05 00:17:50.939751 | orchestrator |
2025-05-05 00:17:50.939861 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-05 00:17:50.939884 | orchestrator | Pausing for 60 seconds
2025-05-05 00:17:50.997705 | orchestrator | changed: [testbed-manager]
2025-05-05 00:17:50.997793 | orchestrator |
2025-05-05 00:17:50.997808 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-05 00:17:50.997841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-05 00:21:30.792010 | orchestrator |
2025-05-05 00:21:30.792152 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-05 00:21:30.792191 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-05 00:21:32.665592 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-05 00:21:32.665678 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-05 00:21:32.665692 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-05 00:21:32.665702 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-05 00:21:32.665711 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-05 00:21:32.665720 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-05 00:21:32.665729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-05 00:21:32.665737 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-05 00:21:32.665746 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-05 00:21:32.665774 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-05 00:21:32.665783 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-05 00:21:32.665792 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-05 00:21:32.665801 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-05 00:21:32.665809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-05 00:21:32.665818 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-05 00:21:32.665827 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-05 00:21:32.665835 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-05 00:21:32.665844 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-05 00:21:32.665860 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-05-05 00:21:32.665869 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-05-05 00:21:32.665878 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:32.665888 | orchestrator | 2025-05-05 00:21:32.665897 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-05 00:21:32.665906 | orchestrator | 2025-05-05 00:21:32.665915 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-05 00:21:32.665934 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:32.780395 | orchestrator | 2025-05-05 00:21:32.780519 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-05 00:21:32.780554 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-05 00:21:32.836404 | orchestrator | 2025-05-05 00:21:32.836520 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-05 00:21:32.836552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-05 00:21:34.357342 | orchestrator | 2025-05-05 00:21:34.357457 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-05 00:21:34.357531 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:34.411533 | orchestrator | 2025-05-05 00:21:34.411643 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-05 00:21:34.411678 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:34.499368 | orchestrator | 2025-05-05 00:21:34.499468 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-05 00:21:34.499533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-05 00:21:37.078916 | orchestrator | 2025-05-05 00:21:37.079033 | orchestrator | TASK 
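The "Check that all containers are in a good state" handler above retries up to 60 times before reporting `changed`. A minimal shell approximation of that polling pattern, assuming Docker's `health`/`status` filters, a 5-second interval, and a 60-try budget (the actual osism role may implement the check differently):

```shell
#!/usr/bin/env bash
# Hedged sketch of a "all containers in a good state" check: poll Docker
# until no container reports unhealthy or restarting. Filter names, retry
# count, and sleep interval are assumptions, not the role's actual code.
check_containers_good() {
    local retries=${1:-60}
    local i bad
    for (( i = 0; i < retries; i++ )); do
        # Any container still unhealthy or restarting means "not good yet".
        bad=$(docker ps --filter health=unhealthy --filter status=restarting -q)
        [[ -z "$bad" ]] && return 0
        echo "FAILED - RETRYING: containers not healthy ($(( retries - i - 1 )) retries left)." >&2
        sleep 5
    done
    return 1
}
```

With an empty filter result the function returns immediately; otherwise it counts down exactly like the handler output above.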
[osism.services.manager : Create required directories] ******************** 2025-05-05 00:21:37.079071 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-05 00:21:37.698472 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-05 00:21:37.698621 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-05 00:21:37.698640 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-05 00:21:37.698655 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-05 00:21:37.698670 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-05 00:21:37.698685 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-05 00:21:37.698699 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-05 00:21:37.698714 | orchestrator | 2025-05-05 00:21:37.698729 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-05 00:21:37.698761 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:37.785241 | orchestrator | 2025-05-05 00:21:37.785355 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-05 00:21:37.785389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-05 00:21:38.939244 | orchestrator | 2025-05-05 00:21:38.939355 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-05 00:21:38.939383 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-05 00:21:39.575856 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-05 00:21:39.575990 | orchestrator | 2025-05-05 00:21:39.576013 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-05 00:21:39.576048 | orchestrator | 
changed: [testbed-manager] 2025-05-05 00:21:39.636343 | orchestrator | 2025-05-05 00:21:39.636432 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-05 00:21:39.636464 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:21:39.688964 | orchestrator | 2025-05-05 00:21:39.689091 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-05 00:21:39.689126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-05 00:21:41.029725 | orchestrator | 2025-05-05 00:21:41.029862 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-05 00:21:41.029899 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-05 00:21:41.643531 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-05 00:21:41.643661 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:41.643681 | orchestrator | 2025-05-05 00:21:41.643697 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-05 00:21:41.643729 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:41.730426 | orchestrator | 2025-05-05 00:21:41.730602 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-05 00:21:41.730642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-05 00:21:42.348106 | orchestrator | 2025-05-05 00:21:42.348235 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-05 00:21:42.348271 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-05 00:21:42.963289 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:42.963415 | orchestrator | 2025-05-05 
00:21:42.963437 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-05 00:21:42.963470 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:43.064826 | orchestrator | 2025-05-05 00:21:43.064940 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-05 00:21:43.064975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-05 00:21:43.627525 | orchestrator | 2025-05-05 00:21:43.627653 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-05 00:21:43.627702 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:44.036400 | orchestrator | 2025-05-05 00:21:44.036572 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-05 00:21:44.036605 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:45.247526 | orchestrator | 2025-05-05 00:21:45.247662 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-05 00:21:45.247700 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-05 00:21:45.999595 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-05 00:21:45.999722 | orchestrator | 2025-05-05 00:21:45.999743 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-05 00:21:45.999776 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:46.386648 | orchestrator | 2025-05-05 00:21:46.386786 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-05 00:21:46.386826 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:46.744878 | orchestrator | 2025-05-05 00:21:46.745015 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] 
************** 2025-05-05 00:21:46.745079 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:46.794263 | orchestrator | 2025-05-05 00:21:46.794353 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-05 00:21:46.794386 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:21:46.870594 | orchestrator | 2025-05-05 00:21:46.870699 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-05 00:21:46.870732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-05 00:21:46.917124 | orchestrator | 2025-05-05 00:21:46.917275 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-05 00:21:46.917316 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:48.914626 | orchestrator | 2025-05-05 00:21:48.914784 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-05 00:21:48.914826 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-05 00:21:49.627680 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-05 00:21:49.627832 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-05 00:21:49.627852 | orchestrator | 2025-05-05 00:21:49.627868 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-05 00:21:49.627903 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:50.337298 | orchestrator | 2025-05-05 00:21:50.337448 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-05 00:21:50.337540 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:51.037572 | orchestrator | 2025-05-05 00:21:51.037722 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] 
*********************** 2025-05-05 00:21:51.037762 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:51.102333 | orchestrator | 2025-05-05 00:21:51.102470 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-05 00:21:51.102542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-05 00:21:51.158073 | orchestrator | 2025-05-05 00:21:51.158226 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-05 00:21:51.158268 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:51.859178 | orchestrator | 2025-05-05 00:21:51.859350 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-05 00:21:51.859410 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-05 00:21:51.948813 | orchestrator | 2025-05-05 00:21:51.948948 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-05 00:21:51.948987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-05 00:21:52.647055 | orchestrator | 2025-05-05 00:21:52.647215 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-05 00:21:52.647257 | orchestrator | changed: [testbed-manager] 2025-05-05 00:21:53.248777 | orchestrator | 2025-05-05 00:21:53.248906 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-05 00:21:53.248972 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:53.292946 | orchestrator | 2025-05-05 00:21:53.293029 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-05 00:21:53.293060 | orchestrator | skipping: [testbed-manager] 
2025-05-05 00:21:53.346309 | orchestrator | 2025-05-05 00:21:53.346401 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-05 00:21:53.346433 | orchestrator | ok: [testbed-manager] 2025-05-05 00:21:54.179268 | orchestrator | 2025-05-05 00:21:54.179396 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-05 00:21:54.179434 | orchestrator | changed: [testbed-manager] 2025-05-05 00:22:36.960818 | orchestrator | 2025-05-05 00:22:36.961002 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-05 00:22:36.961045 | orchestrator | changed: [testbed-manager] 2025-05-05 00:22:37.604346 | orchestrator | 2025-05-05 00:22:37.604560 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-05 00:22:37.604600 | orchestrator | ok: [testbed-manager] 2025-05-05 00:22:40.232753 | orchestrator | 2025-05-05 00:22:40.232913 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-05 00:22:40.232953 | orchestrator | changed: [testbed-manager] 2025-05-05 00:22:40.300308 | orchestrator | 2025-05-05 00:22:40.300426 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-05 00:22:40.300498 | orchestrator | ok: [testbed-manager] 2025-05-05 00:22:40.362987 | orchestrator | 2025-05-05 00:22:40.363077 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-05 00:22:40.363097 | orchestrator | 2025-05-05 00:22:40.363114 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-05 00:22:40.363147 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:23:40.425515 | orchestrator | 2025-05-05 00:23:40.425693 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service 
to start] *** 2025-05-05 00:23:40.425735 | orchestrator | Pausing for 60 seconds 2025-05-05 00:23:45.350686 | orchestrator | changed: [testbed-manager] 2025-05-05 00:23:45.350845 | orchestrator | 2025-05-05 00:23:45.350868 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-05 00:23:45.350907 | orchestrator | changed: [testbed-manager] 2025-05-05 00:24:26.978707 | orchestrator | 2025-05-05 00:24:26.978871 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-05 00:24:26.978911 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-05 00:24:32.550943 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-05 00:24:32.551125 | orchestrator | changed: [testbed-manager] 2025-05-05 00:24:32.551151 | orchestrator | 2025-05-05 00:24:32.551193 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-05 00:24:32.551227 | orchestrator | changed: [testbed-manager] 2025-05-05 00:24:32.647718 | orchestrator | 2025-05-05 00:24:32.647830 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-05 00:24:32.647865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-05 00:24:32.711492 | orchestrator | 2025-05-05 00:24:32.711594 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-05 00:24:32.711605 | orchestrator | 2025-05-05 00:24:32.711615 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-05 00:24:32.711637 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:24:32.836589 | orchestrator | 2025-05-05 00:24:32.836698 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-05-05 00:24:32.836717 | orchestrator | testbed-manager : ok=109 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-05 00:24:32.836733 | orchestrator | 2025-05-05 00:24:32.836765 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-05 00:24:32.846289 | orchestrator | + deactivate 2025-05-05 00:24:32.846392 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-05 00:24:32.846413 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-05 00:24:32.846428 | orchestrator | + export PATH 2025-05-05 00:24:32.846443 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-05 00:24:32.846459 | orchestrator | + '[' -n '' ']' 2025-05-05 00:24:32.846474 | orchestrator | + hash -r 2025-05-05 00:24:32.846488 | orchestrator | + '[' -n '' ']' 2025-05-05 00:24:32.846502 | orchestrator | + unset VIRTUAL_ENV 2025-05-05 00:24:32.846517 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-05 00:24:32.846531 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-05 00:24:32.846546 | orchestrator | + unset -f deactivate 2025-05-05 00:24:32.846561 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-05 00:24:32.846587 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-05 00:24:32.847065 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-05 00:24:32.847094 | orchestrator | + local max_attempts=60 2025-05-05 00:24:32.847111 | orchestrator | + local name=ceph-ansible 2025-05-05 00:24:32.847127 | orchestrator | + local attempt_num=1 2025-05-05 00:24:32.847147 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-05 00:24:32.877175 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-05 00:24:32.877920 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-05 00:24:32.877963 | orchestrator | + local max_attempts=60 2025-05-05 00:24:32.878094 | orchestrator | + local name=kolla-ansible 2025-05-05 00:24:32.878118 | orchestrator | + local attempt_num=1 2025-05-05 00:24:32.878141 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-05 00:24:32.909105 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-05 00:24:32.910400 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-05 00:24:32.910468 | orchestrator | + local max_attempts=60 2025-05-05 00:24:32.910494 | orchestrator | + local name=osism-ansible 2025-05-05 00:24:32.910519 | orchestrator | + local attempt_num=1 2025-05-05 00:24:32.910554 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-05 00:24:32.944110 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-05 00:24:33.580791 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-05 00:24:33.580918 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-05 00:24:33.580956 | orchestrator | ++ semver 8.1.0 9.0.0 
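The xtrace above shows `wait_for_container_healthy` being called for ceph-ansible, kolla-ansible, and osism-ansible, each time reading `docker inspect -f '{{.State.Health.Status}}'`. A minimal reconstruction from the trace, assuming a 5-second poll interval and an overridable `DOCKER_BIN` (added here for testability; the trace calls `/usr/bin/docker` directly):

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the wait_for_container_healthy helper whose
# xtrace appears above. The inspect call and variable names come from the
# trace; the poll interval and failure message are assumptions.
DOCKER_BIN=${DOCKER_BIN:-docker}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$("$DOCKER_BIN" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed poll interval
    done
}
```

In the log all three containers report `healthy` on the first inspect, so each call returns without looping.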
2025-05-05 00:24:33.636569 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-05 00:24:33.827611 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-05 00:24:33.827720 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-05 00:24:33.827754 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-05 00:24:33.834114 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834163 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834178 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-05 00:24:33.834213 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-05 00:24:33.834228 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834248 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834262 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834277 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-05 00:24:33.834291 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" 
listener About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834305 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-05 00:24:33.834319 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834334 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834402 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-05 00:24:33.834419 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834434 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834448 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834462 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-05 00:24:33.834486 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-05 00:24:33.972942 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-05 00:24:33.980399 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-05 00:24:33.980441 | orchestrator | netbox-netbox-worker-1 
registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-05 00:24:33.980453 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-05-05 00:24:33.980464 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-05-05 00:24:33.980481 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-05 00:24:34.034542 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-05 00:24:34.039866 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-05 00:24:34.039971 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-05 00:24:35.575013 | orchestrator | 2025-05-05 00:24:35 | INFO  | Task 1eb76c4c-97d6-4a40-aece-a10cbff83eac (resolvconf) was prepared for execution. 2025-05-05 00:24:38.509235 | orchestrator | 2025-05-05 00:24:35 | INFO  | It takes a moment until task 1eb76c4c-97d6-4a40-aece-a10cbff83eac (resolvconf) has been started and output is visible here. 
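The script gates version-dependent steps on a `semver` helper: `semver 8.1.0 9.0.0` prints `-1` (so `[[ -1 -ge 0 ]]` fails) and `semver 8.1.0 7.0.0` prints `1`. A minimal sketch matching that contract, implemented with GNU `sort -V` as an assumption (the real helper may differ):

```shell
#!/usr/bin/env bash
# Hedged sketch of the semver helper used above: compare two versions and
# print -1, 0, or 1 for a < b, a == b, a > b. Relies on GNU sort -V.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1   # $1 sorts first, so $1 < $2
    else
        echo 1
    fi
}
```

Using the result with `-ge 0`, as the script does, makes the branch run only when the installed version is at or above the threshold.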
2025-05-05 00:24:38.509485 | orchestrator | 2025-05-05 00:24:38.509876 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-05 00:24:38.510649 | orchestrator | 2025-05-05 00:24:38.512092 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-05 00:24:38.512998 | orchestrator | Monday 05 May 2025 00:24:38 +0000 (0:00:00.082) 0:00:00.082 ************ 2025-05-05 00:24:42.563007 | orchestrator | ok: [testbed-manager] 2025-05-05 00:24:42.563409 | orchestrator | 2025-05-05 00:24:42.563659 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-05 00:24:42.563841 | orchestrator | Monday 05 May 2025 00:24:42 +0000 (0:00:04.055) 0:00:04.138 ************ 2025-05-05 00:24:42.623665 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:24:42.626066 | orchestrator | 2025-05-05 00:24:42.626483 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-05 00:24:42.627178 | orchestrator | Monday 05 May 2025 00:24:42 +0000 (0:00:00.061) 0:00:04.199 ************ 2025-05-05 00:24:42.716574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-05 00:24:42.718102 | orchestrator | 2025-05-05 00:24:42.794443 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-05 00:24:42.794553 | orchestrator | Monday 05 May 2025 00:24:42 +0000 (0:00:00.092) 0:00:04.292 ************ 2025-05-05 00:24:42.794588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-05 00:24:42.796811 | orchestrator | 2025-05-05 00:24:42.799973 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-05 00:24:42.800551 | orchestrator | Monday 05 May 2025 00:24:42 +0000 (0:00:00.076) 0:00:04.368 ************ 2025-05-05 00:24:43.848720 | orchestrator | ok: [testbed-manager] 2025-05-05 00:24:43.851179 | orchestrator | 2025-05-05 00:24:43.851235 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-05 00:24:43.851589 | orchestrator | Monday 05 May 2025 00:24:43 +0000 (0:00:01.054) 0:00:05.423 ************ 2025-05-05 00:24:43.897460 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:24:43.898500 | orchestrator | 2025-05-05 00:24:43.899043 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-05 00:24:43.899678 | orchestrator | Monday 05 May 2025 00:24:43 +0000 (0:00:00.049) 0:00:05.472 ************ 2025-05-05 00:24:44.364256 | orchestrator | ok: [testbed-manager] 2025-05-05 00:24:44.364647 | orchestrator | 2025-05-05 00:24:44.364803 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-05 00:24:44.365508 | orchestrator | Monday 05 May 2025 00:24:44 +0000 (0:00:00.465) 0:00:05.938 ************ 2025-05-05 00:24:44.445509 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:24:44.446106 | orchestrator | 2025-05-05 00:24:44.446162 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-05 00:24:44.446444 | orchestrator | Monday 05 May 2025 00:24:44 +0000 (0:00:00.081) 0:00:06.019 ************ 2025-05-05 00:24:44.995839 | orchestrator | changed: [testbed-manager] 2025-05-05 00:24:46.048538 | orchestrator | 2025-05-05 00:24:46.048670 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-05 00:24:46.048691 | orchestrator | Monday 05 May 2025 00:24:44 +0000 (0:00:00.548) 0:00:06.568 ************ 2025-05-05 00:24:46.048722 | orchestrator | changed: 
[testbed-manager]
2025-05-05 00:24:46.048876 | orchestrator |
2025-05-05 00:24:46.049504 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-05 00:24:46.050178 | orchestrator | Monday 05 May 2025 00:24:46 +0000 (0:00:01.054) 0:00:07.623 ************
2025-05-05 00:24:47.007275 | orchestrator | ok: [testbed-manager]
2025-05-05 00:24:47.007612 | orchestrator |
2025-05-05 00:24:47.008320 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-05 00:24:47.009184 | orchestrator | Monday 05 May 2025 00:24:47 +0000 (0:00:00.958) 0:00:08.581 ************
2025-05-05 00:24:47.089503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-05 00:24:47.090188 | orchestrator |
2025-05-05 00:24:47.091207 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-05 00:24:47.091924 | orchestrator | Monday 05 May 2025 00:24:47 +0000 (0:00:00.084) 0:00:08.665 ************
2025-05-05 00:24:48.240160 | orchestrator | changed: [testbed-manager]
2025-05-05 00:24:48.241376 | orchestrator |
2025-05-05 00:24:48.241434 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:24:48.241722 | orchestrator | 2025-05-05 00:24:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-05 00:24:48.241748 | orchestrator | 2025-05-05 00:24:48 | INFO  | Please wait and do not abort execution.
2025-05-05 00:24:48.241770 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-05 00:24:48.242475 | orchestrator |
2025-05-05 00:24:48.243296 | orchestrator | Monday 05 May 2025 00:24:48 +0000 (0:00:01.147) 0:00:09.813 ************
2025-05-05 00:24:48.243960 | orchestrator | ===============================================================================
2025-05-05 00:24:48.244392 | orchestrator | Gathering Facts --------------------------------------------------------- 4.06s
2025-05-05 00:24:48.244879 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2025-05-05 00:24:48.246268 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s
2025-05-05 00:24:48.246629 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2025-05-05 00:24:48.247472 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s
2025-05-05 00:24:48.248005 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2025-05-05 00:24:48.248310 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2025-05-05 00:24:48.248701 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-05-05 00:24:48.249280 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-05-05 00:24:48.249544 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-05 00:24:48.249990 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-05-05 00:24:48.250316 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-05-05 00:24:48.250884 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2025-05-05 00:24:48.611873 | orchestrator | + osism apply sshconfig
2025-05-05 00:24:49.999971 | orchestrator | 2025-05-05 00:24:49 | INFO  | Task d8a202ed-3952-4425-a1c1-c7b28cf3158f (sshconfig) was prepared for execution.
2025-05-05 00:24:52.941688 | orchestrator | 2025-05-05 00:24:49 | INFO  | It takes a moment until task d8a202ed-3952-4425-a1c1-c7b28cf3158f (sshconfig) has been started and output is visible here.
2025-05-05 00:24:52.941833 | orchestrator |
2025-05-05 00:24:52.942935 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-05 00:24:52.943003 | orchestrator |
2025-05-05 00:24:52.943404 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-05 00:24:52.945152 | orchestrator | Monday 05 May 2025 00:24:52 +0000 (0:00:00.100) 0:00:00.100 ************
2025-05-05 00:24:53.512566 | orchestrator | ok: [testbed-manager]
2025-05-05 00:24:53.513459 | orchestrator |
2025-05-05 00:24:53.513507 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-05 00:24:53.514274 | orchestrator | Monday 05 May 2025 00:24:53 +0000 (0:00:00.570) 0:00:00.670 ************
2025-05-05 00:24:53.991915 | orchestrator | changed: [testbed-manager]
2025-05-05 00:24:53.992287 | orchestrator |
2025-05-05 00:24:53.993156 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-05 00:24:53.993513 | orchestrator | Monday 05 May 2025 00:24:53 +0000 (0:00:00.480) 0:00:01.151 ************
2025-05-05 00:24:59.605048 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-05 00:24:59.605816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-05 00:24:59.606278 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-05 00:24:59.607255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-05 00:24:59.607315 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-05 00:24:59.609127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-05 00:24:59.609506 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-05 00:24:59.610178 | orchestrator |
2025-05-05 00:24:59.610970 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-05 00:24:59.611014 | orchestrator | Monday 05 May 2025 00:24:59 +0000 (0:00:05.611) 0:00:06.763 ************
2025-05-05 00:24:59.679175 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:24:59.680985 | orchestrator |
2025-05-05 00:24:59.681018 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-05 00:24:59.681677 | orchestrator | Monday 05 May 2025 00:24:59 +0000 (0:00:00.075) 0:00:06.838 ************
2025-05-05 00:25:00.248954 | orchestrator | changed: [testbed-manager]
2025-05-05 00:25:00.251021 | orchestrator |
2025-05-05 00:25:00.251167 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:25:00.252048 | orchestrator | 2025-05-05 00:25:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-05 00:25:00.252866 | orchestrator | 2025-05-05 00:25:00 | INFO  | Please wait and do not abort execution.
2025-05-05 00:25:00.252944 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-05 00:25:00.253512 | orchestrator |
2025-05-05 00:25:00.253971 | orchestrator | Monday 05 May 2025 00:25:00 +0000 (0:00:00.569) 0:00:07.408 ************
2025-05-05 00:25:00.254439 | orchestrator | ===============================================================================
2025-05-05 00:25:00.255410 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.61s
2025-05-05 00:25:00.255778 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2025-05-05 00:25:00.256410 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2025-05-05 00:25:00.257097 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s
2025-05-05 00:25:00.257773 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-05-05 00:25:00.627219 | orchestrator | + osism apply known-hosts
2025-05-05 00:25:02.009058 | orchestrator | 2025-05-05 00:25:02 | INFO  | Task 9e9c2364-8780-44bf-84cf-c159cdb0cabe (known-hosts) was prepared for execution.
2025-05-05 00:25:05.026317 | orchestrator | 2025-05-05 00:25:02 | INFO  | It takes a moment until task 9e9c2364-8780-44bf-84cf-c159cdb0cabe (known-hosts) has been started and output is visible here.
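The known-hosts play that follows scans each inventory host, collects the returned host keys into a known_hosts file, and finally tightens the file permissions. A minimal standalone sketch of that flow (the hostnames and the key string are placeholders, and the real role runs ssh-keyscan against each host instead of the printf stand-in):

```shell
#!/bin/sh
# Illustrative sketch only; not the osism.commons.known_hosts role itself.
tmp=$(mktemp -d)
for host in testbed-node-0 testbed-node-1 testbed-node-2; do
    # Stand-in for: ssh-keyscan -t rsa,ecdsa,ed25519 "$host"
    printf '%s ssh-ed25519 AAAAC3...placeholder\n' "$host" >> "$tmp/known_hosts"
done
sort -u -o "$tmp/known_hosts" "$tmp/known_hosts"  # de-duplicate entries
chmod 0600 "$tmp/known_hosts"                     # mirrors the final "Set file permissions" task
wc -l < "$tmp/known_hosts"                        # prints 3
```

In the real role the scan and write steps are separate tasks (visible above as "Run ssh-keyscan ..." followed by repeated "Write scanned known_hosts entries"), which is why each host appears as its own loop item in the log.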
2025-05-05 00:25:05.026558 | orchestrator |
2025-05-05 00:25:05.028478 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-05 00:25:05.028534 | orchestrator |
2025-05-05 00:25:05.030257 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-05 00:25:05.031000 | orchestrator | Monday 05 May 2025 00:25:05 +0000 (0:00:00.105) 0:00:00.105 ************
2025-05-05 00:25:11.067566 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-05 00:25:11.067920 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-05 00:25:11.068285 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-05 00:25:11.069024 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-05 00:25:11.069857 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-05 00:25:11.070284 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-05 00:25:11.072618 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-05 00:25:11.073080 | orchestrator |
2025-05-05 00:25:11.073517 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-05 00:25:11.074272 | orchestrator | Monday 05 May 2025 00:25:11 +0000 (0:00:06.042) 0:00:06.147 ************
2025-05-05 00:25:11.226822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-05 00:25:11.228362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-05 00:25:11.228406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-05 00:25:11.228453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-05 00:25:11.228478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-05 00:25:11.228682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-05 00:25:11.228787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-05 00:25:11.229303 | orchestrator |
2025-05-05 00:25:11.229358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-05 00:25:11.229780 | orchestrator | Monday 05 May 2025 00:25:11 +0000 (0:00:00.159) 0:00:06.307 ************
2025-05-05 00:25:12.392019 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBArjLXpWDvP3WYhsnoozPmeHr+cptR40PetH2dCNECC1f+5yulk5X3jc1ThCEInvaUK9rh5DoCQl48zZtD5Zgg0=)
2025-05-05 00:25:12.392467 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDNjFiIYDr2cFimyQiWrOIy6nyTrhf3/LVKhtElfQMXeZ/XrXPY8kJAvzMQMWDPxkGbitlaT374iG+eRs44HufMw+yGHMjuo1uSBtXIuleOqb68Ry9755/ycayIUgP+oJmhRbHl86wFKeYGfihYVQR9HV9sp7iyOHSPOK18t2zJ1p9MjAp6TrzKI+Bq+mNAjlvENyLrR59QGC1YNula2jNU8e7Z6QhKw5ipsPo9vZ5xN+lz1Sj6f95RdQ+14hzVYSAd1T0S91s2sL229B4Yj8IxiFxZvYuGVML/kj13IkKRWlhuxKX3/cmW9okk9oHg7o5fSZWzLS+CYxqCpdeR/Lsp6CgvRKlAkVYNLBOJm+jnyjqTmoeU0yNbzdgNxkSw0CCduLyrOnbyCdPZUwiwk7GAUGNZpi90ygP0yOrip9ygMDTHwTg+kmDnMBKwB05uyFaEP41zVvowzXXL1rCJwbfUZ97LZrOR0AzCMI481soietnQ8gRNn0DDiP5YgP07zn8=) 2025-05-05 00:25:12.392645 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOHOhz/im5kPIoHF73fVr/K6aTMzhZMasjGsdx0JPMbb) 2025-05-05 00:25:12.393375 | orchestrator | 2025-05-05 00:25:12.393938 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:12.395019 | orchestrator | Monday 05 May 2025 00:25:12 +0000 (0:00:01.165) 0:00:07.473 ************ 2025-05-05 00:25:13.401855 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD3UKukc19GH6+tla1DnFiROss35dM0E3C/49jl6Bg/YPjym2y7/mPY/Olkcho3DGSBLBPTYAJSjgBpNGCx9klFzkFKKedu/iYVmt9j7eLT6dFi/CEATb/LFg/OCR8ItKOCg1C6OMxGdKCe6LMA3Rb4d7pmdxjt3dwNAnHiwWlmO3kbMDPD+AhGKCPdb5/56TyqS/fCyeF2fXQHzDNVhTXcz67JOtC8LEdwH3wUbJZUvtU1WFEHDtWH68JA914QtdrwQs235gC1plq37KXx9iTwDg07EFcBexGCv0FPBuipqyRbJEAkbHty7OdIMTxYbMeJgmygA4DEr99jWc7hklShoORQl/owlZMB5j+lm0bCf8KL+oSSYOIyz/R2KiSuCZD+2FV2RVRc0gcgmCRmmoCpekQySDPv7s04mPWjCi/AUtflfWOcUjn3UIaWDpbMK3RDXqOwKI/QeL4Ah9Ny2EPj3jyEQyLxRAbs7kFsUkmY0zUc0G8m7/aIfqf4tiYQjvk=) 2025-05-05 00:25:13.402199 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDPFJShPuAi8PQT/rsh1IEHYMcWMr3kXa6BsTCB8MWX17wIsSCsdIlNKH2eMWxlLVqKz2uaBNKYgJgELhBVvMlI=) 2025-05-05 00:25:13.402246 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBIlJKNDcJXLo6T8z65fL8M/rFYfHbxZHgfzWF/0o/i/) 2025-05-05 00:25:13.402751 | orchestrator | 2025-05-05 00:25:13.403399 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:13.404934 | orchestrator | Monday 05 May 2025 00:25:13 +0000 (0:00:01.008) 0:00:08.481 ************ 2025-05-05 00:25:14.448701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrCb1jGbZ/HF8448y6BXzX0OGLmatxcdF+78C78ovJwxmut3HrvQEkxfJZbRxkErWeVzTi6DDMJK6PUaqj7FKnpMBM4K5UmcKEaH1R4sLs+MO5DGXNxVm0ONljimbQZ/ThPlVBpR40hvDicdNLcPKtSWxJOy5U9/cTV1LwAUWMyIS5DogRzywPyKzIDSUQrrHJf6Oy1yleB0w7MajKfcxFHC8AkbDMKEBdmD+DFcE6WrTVGdLk9woCZo1lzU3oKGck6rN+ESOdjEVoOIzECwq8nptIxqqub8mITIDSVvcWenfIX5gCt2W8i2BNAXSpSp7VJBml8EQydplf0b7OUO4KDmM+8skHvw3pXoc8UMj14s1mpjrEYe5AKbwHM5AsnEKqRxlKBzv8U+y4SV6UwdY7yv9KR31NjctOKT+5urFzKhvua0/50B3hV49GUn+4K2n2wXpx6RuGtGVXqDW0YnWTZeYlF/z2L3fNZ9CLaM5zqWIY9tJjGyEkhUDB3LrRKB0=) 2025-05-05 00:25:14.449065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPISjDSWk2DmarlsGI5KbRQe/QOzAhDaacHLRi+jdz6AWGvxUXI8ML2olBNq1sCn+hE6NGLVdGXsjunwUMfoVCw=) 2025-05-05 00:25:14.449847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKqx8m7mK/LauAaOMBnurUXK1RRv8xhcBy0NVB+NCwbN) 2025-05-05 00:25:14.450150 | orchestrator | 2025-05-05 00:25:14.450562 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:14.451118 | orchestrator | Monday 05 May 2025 00:25:14 +0000 (0:00:01.043) 0:00:09.525 ************ 2025-05-05 00:25:15.549825 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDm4emYF1pYHxtjaBv+BLOymlWH2oCgbYMF5/cp95+Z80lIHolt6D38NJdNYi+6DkIpMqKg89qf2Jy7PzWLLTWCjhmam43vQnhlUpT8UDgzq7teNHi/W7QmNuhDZwLtIy6k9qWlmDIR7iS/c++ByjSu4cQ0GSpkHvE1lWhf37JgueOtK63VUsrH+6rPSy4jBjrc311YbtGlZ1WOWq7iC7lB9iDfvCo9D1r3V7sBmsH8JFrXOZtwDr2smjcfxM7M6lOo8wDcXEcdKDuhWKk1cmkan+QCz+DT9piKk5RGLzMirCrFKb7B4PyX4FG9sYNOsc2clbulveXQ4D9WEdEBxYJFOTHamHhh3C7GBUF7tdC4YVF7ehMw7rrg7Gi5K0nHPikqyUb4xwmXpJGAwatUwQinAUTZDK5y8UdKCR89XzW7f55oSZX7mQo1xhldif9gVFXJF1dY4iiqQdYw854IgIuNa/09CcAGSvltnuIEllnQsXnxruzqxOb3MziTQA8qdFc=) 2025-05-05 00:25:15.550116 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPpyTEHLQ37KmjLUAA6vzM0t0PnBoqo5wsnFIRnTBfB1yu49IsxFoA9aJn1m2iTd0SWLpMUZKN0juHkm/LUMfnU=) 2025-05-05 00:25:15.550155 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAQy6w6fXAnPvikYDIDW4dMfrzv9cpxfI82m/+m835bR) 2025-05-05 00:25:15.550173 | orchestrator | 2025-05-05 00:25:15.550195 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:16.623236 | orchestrator | Monday 05 May 2025 00:25:15 +0000 (0:00:01.104) 0:00:10.629 ************ 2025-05-05 00:25:16.623444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCUw/1jb+wcblgLt5QKvqAOkXXUKcrBuXw8xEE2EvRaLOkPHtRMFU/Ynvbk89kxXKI9mP2aocd2ZmZ3V2KzZ9UQ=) 2025-05-05 00:25:16.624047 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/tFJr/5Fj/iYaZxHOggRYGYXctwCVaznE9tPrGxxkifvaHu6gYvjeG+E1pe51jwDB0DiDcurwSZJzsgfKVs1EPyctYNP2xQ0WEx9YJKGiKCGmtfNKgvrLbWBn/VGw71ONQLE9UvVSubvr4zLU1RWYezMhdxJS94B2cCGCDm7vZvJYirnmXoDd5CCcLtOcq6K+x8rBG6elDSkTj5IqPZKdul3ixseY1JRWL9XYlilFD5ontH6vLSQlM6yv41NAIxEloTbJWiVMog6dncKyO9RcS7gDC2Kdq2Q4OnJkGI79Fgcqt5aF+WO1iGBhAUidTezKbBRuFZDPTYuPtZ4KKXzLNqbml6q+xDiGMY65KS8OjyDk/bjCcB7/Slb+CERiDR+s758hi4Rdxk9K2Q638RQ4Lc2TvD2QtuOQBZxpeCjgnXoTD+Ynm9SVs5bxrTqzH91KnfJHzm9qP7teU/O0rv111opyvTHbqsBRD8KKQFNI+XpBcNYZaFNOt/FMbC2C8WM=) 2025-05-05 00:25:16.624932 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDiw//dR+7Xq6J13Iu9P1Yg93gM1YRWwes0uezGDwC0P) 2025-05-05 00:25:16.625418 | orchestrator | 2025-05-05 00:25:16.625840 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:16.626280 | orchestrator | Monday 05 May 2025 00:25:16 +0000 (0:00:01.074) 0:00:11.704 ************ 2025-05-05 00:25:17.650130 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1E9PXEncEBgmqWux8wOKIwufXO7/2DRiakBhpeWrS469vx8fEtFqjoO/NzqG1f6Fc7e9iR4Z13gQL/j3etDO4SLKNFqr2fzqJ8H2ddTUEoP6ezUg8MGYyjArO6vZILHmVvOTBWpGSTyAw5KUaK37bA1+r5L4CarESCSc1ImQDbLfzgNkLC3hBpIN6cwXPtNyDZn78ZNMqsg94Q2vy7hRN4cL0FCBPQJQ0NdbTJ5CVv6dMx1IKJM5aGh5BbE6V5InhIsuGzekoU13hdLpt3bL5pJ6xi+WcOxq3uR4a/E35ucgGPWCQJSE2ZyQhNBYQ8copUomaCP1hCpF1r9sfzjh8Mg8EYOI61XG49GMpQ1ceYPFaL3vy2wek6+I3N9G3VEpjNQJ1ZWDEzcmp00VRmNcLN6IGHNLsKOpOYWEPoApFdAyw0IabMgL0OQUQoy0bSzKLq73q2Go/1vu+rIlVmndgUhfzx1OvHoYzM9Ng4msn6E0bOWbLivXtW2uYwWZqv/8=) 2025-05-05 00:25:17.650650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEAMp7sYiU47daXIW03B4M/RYj2toSFEVW0mDgyqDRruqHYpFI/Gb/o8FNVeWxKUiY5v30a9BEDPTJxO6N4TYrI=) 2025-05-05 00:25:17.650697 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK4PRpaerL61czKpAyn9sjLaOQNHziUh2J+2liyC6oaK) 2025-05-05 00:25:17.651121 | orchestrator | 2025-05-05 00:25:17.651533 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:17.652018 | orchestrator | Monday 05 May 2025 00:25:17 +0000 (0:00:01.027) 0:00:12.731 ************ 2025-05-05 00:25:18.658819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ULYOFl9nI9ZdJshYvVcUxgRFdRLfok8lmJjz30V08TcVaNRLfmgTpK02EavkeeJmSAGHbNRuPLMHPAwUSTbUxWHMo6ZSpiwvZibIj+sJur3kr/K9RN0BHrNAcbTjA3PSKdACTC4Ahgi9myJLUGV+LZuUtj+sIBbMyb1111Z73n62KJakUC+1zUrSRYnbBXUQlQYlczA8gDxfKZmipZCdHeGPgdPIuppH9YHrHaNeuNKds8FC0RxSjZZABGkjCFFdaail8FQ6fn3NW4oirnf8nLXVXb7fzpr69jbcHrwDFCV4o1PaQzusax1e7hquPut6xYUINCa452emDl2kL2A+4cNJFbTkMCFAz+mSaqV3aCa9Su8et1te509oGSWLDZ6rlx5YgKTEw8yc6BRaqXUgCtaqBwyYmc9qo771Zdk3FlqE7GoSU0jpJ8YheLPD1Z2jeGGp3On5sEwXoxAYG4MkYekWD5Em+X6bQIwl/WAOroyKjP85+dS/XohNG/fAPv8=) 2025-05-05 00:25:18.659539 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA8N9D5EJFY6N1/QTn333QLajSgrnANPLWFevdYdyqNKPtLkWfQ5EtO8qobqa00i8QTNHhDVWaGN4g3eCac+dck=) 2025-05-05 00:25:18.659585 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILOrqoQGBmVzuBLGCwfdhBWqOBHNGSb+TBvpXooGnK//) 2025-05-05 00:25:18.660532 | orchestrator | 2025-05-05 00:25:18.661518 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-05 00:25:18.662390 | orchestrator | Monday 05 May 2025 00:25:18 +0000 (0:00:01.006) 0:00:13.737 ************ 2025-05-05 00:25:23.901613 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-05 00:25:23.903147 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-05 00:25:23.903394 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-4) 2025-05-05 00:25:23.904120 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-05 00:25:23.905824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-05 00:25:23.906278 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-05 00:25:23.906828 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-05 00:25:23.907240 | orchestrator | 2025-05-05 00:25:23.908255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-05 00:25:23.908947 | orchestrator | Monday 05 May 2025 00:25:23 +0000 (0:00:05.244) 0:00:18.982 ************ 2025-05-05 00:25:24.077765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-05 00:25:24.078840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-05 00:25:24.079982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-05 00:25:24.080298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-05 00:25:24.081002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-05 00:25:24.081812 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-05 00:25:24.082466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-05 00:25:24.082706 | orchestrator | 2025-05-05 00:25:24.083613 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:24.084339 | orchestrator | Monday 05 May 2025 00:25:24 +0000 (0:00:00.175) 0:00:19.157 ************ 2025-05-05 00:25:25.120218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOHOhz/im5kPIoHF73fVr/K6aTMzhZMasjGsdx0JPMbb) 2025-05-05 00:25:25.122348 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNjFiIYDr2cFimyQiWrOIy6nyTrhf3/LVKhtElfQMXeZ/XrXPY8kJAvzMQMWDPxkGbitlaT374iG+eRs44HufMw+yGHMjuo1uSBtXIuleOqb68Ry9755/ycayIUgP+oJmhRbHl86wFKeYGfihYVQR9HV9sp7iyOHSPOK18t2zJ1p9MjAp6TrzKI+Bq+mNAjlvENyLrR59QGC1YNula2jNU8e7Z6QhKw5ipsPo9vZ5xN+lz1Sj6f95RdQ+14hzVYSAd1T0S91s2sL229B4Yj8IxiFxZvYuGVML/kj13IkKRWlhuxKX3/cmW9okk9oHg7o5fSZWzLS+CYxqCpdeR/Lsp6CgvRKlAkVYNLBOJm+jnyjqTmoeU0yNbzdgNxkSw0CCduLyrOnbyCdPZUwiwk7GAUGNZpi90ygP0yOrip9ygMDTHwTg+kmDnMBKwB05uyFaEP41zVvowzXXL1rCJwbfUZ97LZrOR0AzCMI481soietnQ8gRNn0DDiP5YgP07zn8=) 2025-05-05 00:25:25.122817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBArjLXpWDvP3WYhsnoozPmeHr+cptR40PetH2dCNECC1f+5yulk5X3jc1ThCEInvaUK9rh5DoCQl48zZtD5Zgg0=) 2025-05-05 00:25:25.122902 | orchestrator | 2025-05-05 00:25:25.123590 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:25.124028 | orchestrator | Monday 05 May 2025 00:25:25 
+0000 (0:00:01.043) 0:00:20.200 ************ 2025-05-05 00:25:26.174288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDPFJShPuAi8PQT/rsh1IEHYMcWMr3kXa6BsTCB8MWX17wIsSCsdIlNKH2eMWxlLVqKz2uaBNKYgJgELhBVvMlI=) 2025-05-05 00:25:26.174578 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD3UKukc19GH6+tla1DnFiROss35dM0E3C/49jl6Bg/YPjym2y7/mPY/Olkcho3DGSBLBPTYAJSjgBpNGCx9klFzkFKKedu/iYVmt9j7eLT6dFi/CEATb/LFg/OCR8ItKOCg1C6OMxGdKCe6LMA3Rb4d7pmdxjt3dwNAnHiwWlmO3kbMDPD+AhGKCPdb5/56TyqS/fCyeF2fXQHzDNVhTXcz67JOtC8LEdwH3wUbJZUvtU1WFEHDtWH68JA914QtdrwQs235gC1plq37KXx9iTwDg07EFcBexGCv0FPBuipqyRbJEAkbHty7OdIMTxYbMeJgmygA4DEr99jWc7hklShoORQl/owlZMB5j+lm0bCf8KL+oSSYOIyz/R2KiSuCZD+2FV2RVRc0gcgmCRmmoCpekQySDPv7s04mPWjCi/AUtflfWOcUjn3UIaWDpbMK3RDXqOwKI/QeL4Ah9Ny2EPj3jyEQyLxRAbs7kFsUkmY0zUc0G8m7/aIfqf4tiYQjvk=) 2025-05-05 00:25:26.175447 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBIlJKNDcJXLo6T8z65fL8M/rFYfHbxZHgfzWF/0o/i/) 2025-05-05 00:25:26.176225 | orchestrator | 2025-05-05 00:25:26.176476 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:26.176512 | orchestrator | Monday 05 May 2025 00:25:26 +0000 (0:00:01.053) 0:00:21.254 ************ 2025-05-05 00:25:27.218830 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPISjDSWk2DmarlsGI5KbRQe/QOzAhDaacHLRi+jdz6AWGvxUXI8ML2olBNq1sCn+hE6NGLVdGXsjunwUMfoVCw=) 2025-05-05 00:25:27.219629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDrCb1jGbZ/HF8448y6BXzX0OGLmatxcdF+78C78ovJwxmut3HrvQEkxfJZbRxkErWeVzTi6DDMJK6PUaqj7FKnpMBM4K5UmcKEaH1R4sLs+MO5DGXNxVm0ONljimbQZ/ThPlVBpR40hvDicdNLcPKtSWxJOy5U9/cTV1LwAUWMyIS5DogRzywPyKzIDSUQrrHJf6Oy1yleB0w7MajKfcxFHC8AkbDMKEBdmD+DFcE6WrTVGdLk9woCZo1lzU3oKGck6rN+ESOdjEVoOIzECwq8nptIxqqub8mITIDSVvcWenfIX5gCt2W8i2BNAXSpSp7VJBml8EQydplf0b7OUO4KDmM+8skHvw3pXoc8UMj14s1mpjrEYe5AKbwHM5AsnEKqRxlKBzv8U+y4SV6UwdY7yv9KR31NjctOKT+5urFzKhvua0/50B3hV49GUn+4K2n2wXpx6RuGtGVXqDW0YnWTZeYlF/z2L3fNZ9CLaM5zqWIY9tJjGyEkhUDB3LrRKB0=) 2025-05-05 00:25:27.219856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKqx8m7mK/LauAaOMBnurUXK1RRv8xhcBy0NVB+NCwbN) 2025-05-05 00:25:27.219891 | orchestrator | 2025-05-05 00:25:27.220571 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:27.221276 | orchestrator | Monday 05 May 2025 00:25:27 +0000 (0:00:01.043) 0:00:22.297 ************ 2025-05-05 00:25:28.277230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDm4emYF1pYHxtjaBv+BLOymlWH2oCgbYMF5/cp95+Z80lIHolt6D38NJdNYi+6DkIpMqKg89qf2Jy7PzWLLTWCjhmam43vQnhlUpT8UDgzq7teNHi/W7QmNuhDZwLtIy6k9qWlmDIR7iS/c++ByjSu4cQ0GSpkHvE1lWhf37JgueOtK63VUsrH+6rPSy4jBjrc311YbtGlZ1WOWq7iC7lB9iDfvCo9D1r3V7sBmsH8JFrXOZtwDr2smjcfxM7M6lOo8wDcXEcdKDuhWKk1cmkan+QCz+DT9piKk5RGLzMirCrFKb7B4PyX4FG9sYNOsc2clbulveXQ4D9WEdEBxYJFOTHamHhh3C7GBUF7tdC4YVF7ehMw7rrg7Gi5K0nHPikqyUb4xwmXpJGAwatUwQinAUTZDK5y8UdKCR89XzW7f55oSZX7mQo1xhldif9gVFXJF1dY4iiqQdYw854IgIuNa/09CcAGSvltnuIEllnQsXnxruzqxOb3MziTQA8qdFc=) 2025-05-05 00:25:28.277501 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPpyTEHLQ37KmjLUAA6vzM0t0PnBoqo5wsnFIRnTBfB1yu49IsxFoA9aJn1m2iTd0SWLpMUZKN0juHkm/LUMfnU=) 2025-05-05 00:25:28.277537 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAQy6w6fXAnPvikYDIDW4dMfrzv9cpxfI82m/+m835bR) 2025-05-05 00:25:28.277556 | orchestrator | 2025-05-05 00:25:28.277578 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:28.277710 | orchestrator | Monday 05 May 2025 00:25:28 +0000 (0:00:01.059) 0:00:23.357 ************ 2025-05-05 00:25:29.326493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/tFJr/5Fj/iYaZxHOggRYGYXctwCVaznE9tPrGxxkifvaHu6gYvjeG+E1pe51jwDB0DiDcurwSZJzsgfKVs1EPyctYNP2xQ0WEx9YJKGiKCGmtfNKgvrLbWBn/VGw71ONQLE9UvVSubvr4zLU1RWYezMhdxJS94B2cCGCDm7vZvJYirnmXoDd5CCcLtOcq6K+x8rBG6elDSkTj5IqPZKdul3ixseY1JRWL9XYlilFD5ontH6vLSQlM6yv41NAIxEloTbJWiVMog6dncKyO9RcS7gDC2Kdq2Q4OnJkGI79Fgcqt5aF+WO1iGBhAUidTezKbBRuFZDPTYuPtZ4KKXzLNqbml6q+xDiGMY65KS8OjyDk/bjCcB7/Slb+CERiDR+s758hi4Rdxk9K2Q638RQ4Lc2TvD2QtuOQBZxpeCjgnXoTD+Ynm9SVs5bxrTqzH91KnfJHzm9qP7teU/O0rv111opyvTHbqsBRD8KKQFNI+XpBcNYZaFNOt/FMbC2C8WM=) 2025-05-05 00:25:29.327954 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCUw/1jb+wcblgLt5QKvqAOkXXUKcrBuXw8xEE2EvRaLOkPHtRMFU/Ynvbk89kxXKI9mP2aocd2ZmZ3V2KzZ9UQ=) 2025-05-05 00:25:29.329543 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDiw//dR+7Xq6J13Iu9P1Yg93gM1YRWwes0uezGDwC0P) 2025-05-05 00:25:29.330458 | orchestrator | 2025-05-05 00:25:29.330527 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:29.330884 | orchestrator | Monday 05 May 2025 00:25:29 +0000 (0:00:01.050) 0:00:24.407 ************ 2025-05-05 00:25:30.386871 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1E9PXEncEBgmqWux8wOKIwufXO7/2DRiakBhpeWrS469vx8fEtFqjoO/NzqG1f6Fc7e9iR4Z13gQL/j3etDO4SLKNFqr2fzqJ8H2ddTUEoP6ezUg8MGYyjArO6vZILHmVvOTBWpGSTyAw5KUaK37bA1+r5L4CarESCSc1ImQDbLfzgNkLC3hBpIN6cwXPtNyDZn78ZNMqsg94Q2vy7hRN4cL0FCBPQJQ0NdbTJ5CVv6dMx1IKJM5aGh5BbE6V5InhIsuGzekoU13hdLpt3bL5pJ6xi+WcOxq3uR4a/E35ucgGPWCQJSE2ZyQhNBYQ8copUomaCP1hCpF1r9sfzjh8Mg8EYOI61XG49GMpQ1ceYPFaL3vy2wek6+I3N9G3VEpjNQJ1ZWDEzcmp00VRmNcLN6IGHNLsKOpOYWEPoApFdAyw0IabMgL0OQUQoy0bSzKLq73q2Go/1vu+rIlVmndgUhfzx1OvHoYzM9Ng4msn6E0bOWbLivXtW2uYwWZqv/8=) 2025-05-05 00:25:30.387426 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK4PRpaerL61czKpAyn9sjLaOQNHziUh2J+2liyC6oaK) 2025-05-05 00:25:30.387580 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEAMp7sYiU47daXIW03B4M/RYj2toSFEVW0mDgyqDRruqHYpFI/Gb/o8FNVeWxKUiY5v30a9BEDPTJxO6N4TYrI=) 2025-05-05 00:25:30.388090 | orchestrator | 2025-05-05 00:25:30.388992 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-05 00:25:30.389432 | orchestrator | Monday 05 May 2025 00:25:30 +0000 (0:00:01.060) 0:00:25.468 ************ 2025-05-05 00:25:31.422935 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ULYOFl9nI9ZdJshYvVcUxgRFdRLfok8lmJjz30V08TcVaNRLfmgTpK02EavkeeJmSAGHbNRuPLMHPAwUSTbUxWHMo6ZSpiwvZibIj+sJur3kr/K9RN0BHrNAcbTjA3PSKdACTC4Ahgi9myJLUGV+LZuUtj+sIBbMyb1111Z73n62KJakUC+1zUrSRYnbBXUQlQYlczA8gDxfKZmipZCdHeGPgdPIuppH9YHrHaNeuNKds8FC0RxSjZZABGkjCFFdaail8FQ6fn3NW4oirnf8nLXVXb7fzpr69jbcHrwDFCV4o1PaQzusax1e7hquPut6xYUINCa452emDl2kL2A+4cNJFbTkMCFAz+mSaqV3aCa9Su8et1te509oGSWLDZ6rlx5YgKTEw8yc6BRaqXUgCtaqBwyYmc9qo771Zdk3FlqE7GoSU0jpJ8YheLPD1Z2jeGGp3On5sEwXoxAYG4MkYekWD5Em+X6bQIwl/WAOroyKjP85+dS/XohNG/fAPv8=) 2025-05-05 00:25:31.423139 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA8N9D5EJFY6N1/QTn333QLajSgrnANPLWFevdYdyqNKPtLkWfQ5EtO8qobqa00i8QTNHhDVWaGN4g3eCac+dck=)
2025-05-05 00:25:31.423252 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILOrqoQGBmVzuBLGCwfdhBWqOBHNGSb+TBvpXooGnK//)
2025-05-05 00:25:31.423433 | orchestrator |
2025-05-05 00:25:31.423464 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-05-05 00:25:31.424198 | orchestrator | Monday 05 May 2025 00:25:31 +0000 (0:00:01.035) 0:00:26.503 ************
2025-05-05 00:25:31.590554 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-05 00:25:31.590847 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-05 00:25:31.591887 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-05 00:25:31.592425 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-05 00:25:31.592760 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-05 00:25:31.593172 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-05 00:25:31.594837 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-05 00:25:31.597454 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:25:31.597820 | orchestrator |
2025-05-05 00:25:31.598473 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-05-05 00:25:31.599274 | orchestrator | Monday 05 May 2025 00:25:31 +0000 (0:00:00.168) 0:00:26.672 ************
2025-05-05 00:25:31.645579 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:25:31.645765 | orchestrator |
2025-05-05 00:25:31.645797 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-05-05 00:25:31.646244 | orchestrator | Monday 05 May 2025 00:25:31 +0000 (0:00:00.055) 0:00:26.728 ************
2025-05-05 00:25:31.714328 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:25:31.715083 | orchestrator |
2025-05-05 00:25:31.716151 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-05-05 00:25:31.716886 | orchestrator | Monday 05 May 2025 00:25:31 +0000 (0:00:00.068) 0:00:26.796 ************
2025-05-05 00:25:32.417020 | orchestrator | changed: [testbed-manager]
2025-05-05 00:25:32.417950 | orchestrator |
2025-05-05 00:25:32.418000 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:25:32.419059 | orchestrator | 2025-05-05 00:25:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-05 00:25:32.419757 | orchestrator | 2025-05-05 00:25:32 | INFO  | Please wait and do not abort execution.
2025-05-05 00:25:32.419790 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-05 00:25:32.420758 | orchestrator |
2025-05-05 00:25:32.421729 | orchestrator | Monday 05 May 2025 00:25:32 +0000 (0:00:00.700) 0:00:27.497 ************
2025-05-05 00:25:32.422268 | orchestrator | ===============================================================================
2025-05-05 00:25:32.423087 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.04s
2025-05-05 00:25:32.423768 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.24s
2025-05-05 00:25:32.425101 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2025-05-05 00:25:32.425625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-05-05 00:25:32.426162 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-05-05 00:25:32.426838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-05-05 00:25:32.428384 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-05-05 00:25:32.428649 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-05 00:25:32.430527 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-05 00:25:32.430928 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-05 00:25:32.432072 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-05 00:25:32.433105 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-05 00:25:32.433440 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-05 00:25:32.434163 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-05 00:25:32.434642 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-05 00:25:32.435276 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-05 00:25:32.435602 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s
2025-05-05 00:25:32.436160 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2025-05-05 00:25:32.436420 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2025-05-05 00:25:32.436967 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-05-05 00:25:32.769479 | orchestrator | + osism apply squid
2025-05-05 00:25:34.152817 | orchestrator | 2025-05-05 00:25:34 | INFO  | Task 2123ff3d-e306-4ece-9f52-67f520e6621c (squid) was prepared for execution.
2025-05-05 00:25:37.064825 | orchestrator | 2025-05-05 00:25:34 | INFO  | It takes a moment until task 2123ff3d-e306-4ece-9f52-67f520e6621c (squid) has been started and output is visible here.
2025-05-05 00:25:37.064980 | orchestrator |
2025-05-05 00:25:37.065420 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-05 00:25:37.066414 | orchestrator |
2025-05-05 00:25:37.067364 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-05 00:25:37.069040 | orchestrator | Monday 05 May 2025 00:25:37 +0000 (0:00:00.101) 0:00:00.101 ************
2025-05-05 00:25:37.150177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-05 00:25:37.150950 | orchestrator |
2025-05-05 00:25:37.151984 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-05 00:25:37.152813 | orchestrator | Monday 05 May 2025 00:25:37 +0000 (0:00:00.088) 0:00:00.190 ************
2025-05-05 00:25:38.507780 | orchestrator | ok: [testbed-manager]
2025-05-05 00:25:38.508282 | orchestrator |
2025-05-05 00:25:38.508375 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-05 00:25:38.508775 | orchestrator | Monday 05 May 2025 00:25:38 +0000 (0:00:01.355) 0:00:01.545 ************
2025-05-05 00:25:39.685600 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-05 00:25:39.686438 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-05 00:25:39.687264 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-05 00:25:39.688048 | orchestrator |
2025-05-05 00:25:39.688900 | orchestrator | TASK [osism.services.squid : Copy
squid configuration files] ******************* 2025-05-05 00:25:39.689222 | orchestrator | Monday 05 May 2025 00:25:39 +0000 (0:00:01.178) 0:00:02.724 ************ 2025-05-05 00:25:40.745356 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-05 00:25:40.746369 | orchestrator | 2025-05-05 00:25:40.746833 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-05 00:25:40.747538 | orchestrator | Monday 05 May 2025 00:25:40 +0000 (0:00:01.060) 0:00:03.784 ************ 2025-05-05 00:25:41.096130 | orchestrator | ok: [testbed-manager] 2025-05-05 00:25:41.096539 | orchestrator | 2025-05-05 00:25:41.096830 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-05 00:25:41.097659 | orchestrator | Monday 05 May 2025 00:25:41 +0000 (0:00:00.350) 0:00:04.135 ************ 2025-05-05 00:25:42.049665 | orchestrator | changed: [testbed-manager] 2025-05-05 00:25:42.050235 | orchestrator | 2025-05-05 00:25:42.051035 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-05 00:25:42.051549 | orchestrator | Monday 05 May 2025 00:25:42 +0000 (0:00:00.953) 0:00:05.088 ************ 2025-05-05 00:26:13.723128 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-05 00:26:26.029501 | orchestrator | ok: [testbed-manager]
2025-05-05 00:26:26.029688 | orchestrator |
2025-05-05 00:26:26.029724 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-05 00:26:26.029752 | orchestrator | Monday 05 May 2025 00:26:13 +0000 (0:00:31.666) 0:00:36.755 ************
2025-05-05 00:26:26.029799 | orchestrator | changed: [testbed-manager]
2025-05-05 00:26:26.031144 | orchestrator |
2025-05-05 00:26:26.031214 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-05 00:26:26.032126 | orchestrator | Monday 05 May 2025 00:26:26 +0000 (0:00:12.311) 0:00:49.066 ************
2025-05-05 00:27:26.115267 | orchestrator | Pausing for 60 seconds
2025-05-05 00:27:26.170718 | orchestrator | changed: [testbed-manager]
2025-05-05 00:27:26.170841 | orchestrator |
2025-05-05 00:27:26.170863 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-05 00:27:26.170880 | orchestrator | Monday 05 May 2025 00:27:26 +0000 (0:01:00.081) 0:01:49.148 ************
2025-05-05 00:27:26.170914 | orchestrator | ok: [testbed-manager]
2025-05-05 00:27:26.171059 | orchestrator |
2025-05-05 00:27:26.172228 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-05 00:27:26.741676 | orchestrator | Monday 05 May 2025 00:27:26 +0000 (0:00:00.062) 0:01:49.210 ************
2025-05-05 00:27:26.741855 | orchestrator | changed: [testbed-manager]
2025-05-05 00:27:26.744103 | orchestrator |
2025-05-05 00:27:26.744184 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:27:26.744610 | orchestrator | 2025-05-05 00:27:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-05 00:27:26.744640 | orchestrator | 2025-05-05 00:27:26 | INFO  | Please wait and do not abort execution.
2025-05-05 00:27:26.744662 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:27:26.745287 | orchestrator |
2025-05-05 00:27:26.746012 | orchestrator | Monday 05 May 2025 00:27:26 +0000 (0:00:00.571) 0:01:49.781 ************
2025-05-05 00:27:26.746319 | orchestrator | ===============================================================================
2025-05-05 00:27:26.746882 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-05-05 00:27:26.747382 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.67s
2025-05-05 00:27:26.748878 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.31s
2025-05-05 00:27:26.749700 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.36s
2025-05-05 00:27:26.750412 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s
2025-05-05 00:27:26.751133 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-05-05 00:27:26.751708 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s
2025-05-05 00:27:26.752408 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s
2025-05-05 00:27:26.753062 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2025-05-05 00:27:26.753649 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-05-05 00:27:26.754583 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-05-05 00:27:27.135820 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-05 00:27:27.139468 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-05-05 00:27:27.139557 | orchestrator | ++ semver 8.1.0 9.0.0
2025-05-05 00:27:27.196005 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-05 00:27:27.199935 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-05 00:27:27.200000 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-05-05 00:27:27.200033 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-05 00:27:27.205776 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-05 00:27:27.210735 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-05 00:27:28.625098 | orchestrator | 2025-05-05 00:27:28 | INFO  | Task a8282f00-e170-44a4-b1e6-5de010029ba5 (operator) was prepared for execution.
2025-05-05 00:27:31.427072 | orchestrator | 2025-05-05 00:27:28 | INFO  | It takes a moment until task a8282f00-e170-44a4-b1e6-5de010029ba5 (operator) has been started and output is visible here.
2025-05-05 00:27:31.427225 | orchestrator |
2025-05-05 00:27:31.430435 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-05 00:27:31.431294 | orchestrator |
2025-05-05 00:27:31.432148 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-05 00:27:31.432962 | orchestrator | Monday 05 May 2025 00:27:31 +0000 (0:00:00.063) 0:00:00.063 ************
2025-05-05 00:27:35.514099 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:35.514808 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:35.514847 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:27:35.514872 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:27:35.515247 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:35.515885 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:27:35.518289 | orchestrator |
2025-05-05 00:27:35.518690 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-05 00:27:35.519001 | orchestrator | Monday 05 May 2025 00:27:35 +0000 (0:00:04.086) 0:00:04.149 ************
2025-05-05 00:27:36.271076 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:27:36.273736 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:36.273948 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:36.274945 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:27:36.276452 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:27:36.276914 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:36.277751 | orchestrator |
2025-05-05 00:27:36.278475 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-05 00:27:36.279811 | orchestrator |
2025-05-05 00:27:36.280629 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-05 00:27:36.287073 | orchestrator | Monday 05 May 2025 00:27:36 +0000 (0:00:00.757) 0:00:04.907 ************
2025-05-05 00:27:36.343037 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:27:36.367814 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:27:36.392238 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:27:36.431454 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:36.431564 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:36.431590 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:36.434859 | orchestrator |
2025-05-05 00:27:36.434914 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-05 00:27:36.435154 | orchestrator | Monday 05 May 2025 00:27:36 +0000 (0:00:00.160) 0:00:05.067 ************
2025-05-05 00:27:36.515366 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:27:36.533409 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:27:36.588996 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:27:36.589652 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:36.590795 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:36.591731 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:36.591945 | orchestrator |
2025-05-05 00:27:36.592766 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-05 00:27:36.593538 | orchestrator | Monday 05 May 2025 00:27:36 +0000 (0:00:00.157) 0:00:05.224 ************
2025-05-05 00:27:37.188286 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:37.188501 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:37.189077 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:37.189787 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:37.190443 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:37.191024 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:37.191321 | orchestrator |
2025-05-05 00:27:37.192251 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-05 00:27:37.192406 | orchestrator | Monday 05 May 2025 00:27:37 +0000 (0:00:00.599) 0:00:05.824 ************
2025-05-05 00:27:38.014879 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:38.015078 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:38.015103 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:38.015127 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:38.017596 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:38.018112 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:38.019252 | orchestrator |
2025-05-05 00:27:38.020682 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-05 00:27:38.022056 | orchestrator | Monday 05 May 2025 00:27:38 +0000 (0:00:00.823) 0:00:06.648 ************
2025-05-05 00:27:39.210620 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-05 00:27:39.210802 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-05 00:27:39.211552 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-05 00:27:39.212042 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-05 00:27:39.212077 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-05 00:27:39.212830 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-05 00:27:39.213295 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-05 00:27:39.213689 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-05 00:27:39.217058 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-05 00:27:39.217889 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-05 00:27:39.217920 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-05 00:27:39.217937 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-05 00:27:39.217953 | orchestrator |
2025-05-05 00:27:39.217975 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-05 00:27:39.218889 | orchestrator | Monday 05 May 2025 00:27:39 +0000 (0:00:01.197) 0:00:07.846 ************
2025-05-05 00:27:40.497908 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:40.498163 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:40.498910 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:40.500568 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:40.500741 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:40.502118 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:40.505917 | orchestrator |
2025-05-05 00:27:41.681247 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-05 00:27:41.681394 | orchestrator | Monday 05 May 2025 00:27:40 +0000 (0:00:01.285) 0:00:09.132 ************
2025-05-05 00:27:41.681434 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-05 00:27:41.684027 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-05 00:27:41.795704 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-05 00:27:41.795860 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-05 00:27:41.795935 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-05 00:27:41.795954 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-05 00:27:41.795969 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-05 00:27:41.795983 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-05 00:27:41.795997 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-05 00:27:41.796016 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-05 00:27:41.796886 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-05 00:27:41.797103 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-05 00:27:41.797917 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-05 00:27:41.798286 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-05 00:27:41.798652 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-05 00:27:41.798919 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-05 00:27:41.799258 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-05 00:27:41.799673 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-05 00:27:41.800110 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-05 00:27:41.800595 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-05 00:27:41.800918 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-05 00:27:41.801238 | orchestrator |
2025-05-05 00:27:41.803068 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-05 00:27:42.456976 | orchestrator | Monday 05 May 2025 00:27:41 +0000 (0:00:01.295) 0:00:10.428 ************
2025-05-05 00:27:42.457174 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:42.457354 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:42.457420 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:42.457988 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:42.459978 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:42.461530 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:42.463006 | orchestrator |
2025-05-05 00:27:42.463973 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-05 00:27:42.464005 | orchestrator | Monday 05 May 2025 00:27:42 +0000 (0:00:00.658) 0:00:11.086 ************
2025-05-05 00:27:42.538508 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:27:42.567460 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:27:42.593364 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:27:42.651079 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:27:42.651282 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:27:42.652031 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:27:42.652637 | orchestrator |
2025-05-05 00:27:42.653396 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-05 00:27:42.653588 | orchestrator | Monday 05 May 2025 00:27:42 +0000 (0:00:00.200) 0:00:11.286 ************
2025-05-05 00:27:43.369446 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-05 00:27:43.370005 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 00:27:43.370492 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:43.371110 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:43.374112 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-05 00:27:43.374911 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:43.374941 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-05 00:27:43.374959 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-05 00:27:43.374976 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:43.374998 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:43.376053 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-05 00:27:43.376240 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:43.377203 | orchestrator |
2025-05-05 00:27:43.377973 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-05 00:27:43.378866 | orchestrator | Monday 05 May 2025 00:27:43 +0000 (0:00:00.718) 0:00:12.005 ************
2025-05-05 00:27:43.421345 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:27:43.452037 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:27:43.494448 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:27:43.524097 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:27:43.571441 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:27:43.572862 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:27:43.574155 | orchestrator |
2025-05-05 00:27:43.577163 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-05 00:27:43.577368 | orchestrator | Monday 05 May 2025 00:27:43 +0000 (0:00:00.202) 0:00:12.207 ************
2025-05-05 00:27:43.661226 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:27:43.715779 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:27:43.745606 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:27:43.784047 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:27:43.819835 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:27:43.820025 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:27:43.820047 | orchestrator |
2025-05-05 00:27:43.820068 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-05 00:27:43.820726 | orchestrator | Monday 05 May 2025 00:27:43 +0000 (0:00:00.247) 0:00:12.455 ************
2025-05-05 00:27:43.889859 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:27:43.916674 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:27:43.940233 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:27:44.001823 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:27:44.008099 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:27:44.009589 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:27:44.009987 | orchestrator |
2025-05-05 00:27:44.010822 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-05 00:27:44.011326 | orchestrator | Monday 05 May 2025 00:27:43 +0000 (0:00:00.180) 0:00:12.636 ************
2025-05-05 00:27:44.719767 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:44.721362 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:44.721417 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:44.722867 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:44.723928 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:44.726963 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:44.815229 | orchestrator |
2025-05-05 00:27:44.815350 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-05 00:27:44.815369 | orchestrator | Monday 05 May 2025 00:27:44 +0000 (0:00:00.717) 0:00:13.353 ************
2025-05-05 00:27:44.815400 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:27:44.835992 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:27:44.934456 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:27:44.935498 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:27:44.935953 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:27:44.936301 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:27:44.937009 | orchestrator |
2025-05-05 00:27:44.937775 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:27:44.938252 | orchestrator | 2025-05-05 00:27:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-05 00:27:44.938535 | orchestrator | 2025-05-05 00:27:44 | INFO  | Please wait and do not abort execution.
2025-05-05 00:27:44.939240 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:27:44.939917 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:27:44.940371 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:27:44.941497 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:27:44.941718 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:27:44.942260 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:27:44.942525 | orchestrator |
2025-05-05 00:27:44.943447 | orchestrator | Monday 05 May 2025 00:27:44 +0000 (0:00:00.218) 0:00:13.571 ************
2025-05-05 00:27:44.944454 | orchestrator | ===============================================================================
2025-05-05 00:27:44.944550 | orchestrator | Gathering Facts --------------------------------------------------------- 4.09s
2025-05-05 00:27:44.944717 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2025-05-05 00:27:44.945225 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2025-05-05 00:27:44.945800 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s
2025-05-05 00:27:44.946432 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2025-05-05 00:27:44.947268 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-05-05 00:27:44.947631 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2025-05-05 00:27:44.948295 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s
2025-05-05 00:27:44.948582 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.66s
2025-05-05 00:27:44.949221 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-05-05 00:27:44.949795 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.25s
2025-05-05 00:27:44.950137 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-05-05 00:27:44.950679 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s
2025-05-05 00:27:44.951248 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-05-05 00:27:44.951545 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-05-05 00:27:44.952284 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-05-05 00:27:44.952595 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-05-05 00:27:45.366321 | orchestrator | + osism apply --environment custom facts
2025-05-05 00:27:46.724122 | orchestrator | 2025-05-05 00:27:46 | INFO  | Trying to run play facts in environment custom
2025-05-05 00:27:46.772514 | orchestrator | 2025-05-05 00:27:46 | INFO  | Task 9033bfce-66d1-43c4-b28d-6af2f9c2c475 (facts) was prepared for execution.
2025-05-05 00:27:49.745465 | orchestrator | 2025-05-05 00:27:46 | INFO  | It takes a moment until task 9033bfce-66d1-43c4-b28d-6af2f9c2c475 (facts) has been started and output is visible here.
2025-05-05 00:27:49.745646 | orchestrator |
2025-05-05 00:27:49.746380 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-05 00:27:49.746520 | orchestrator |
2025-05-05 00:27:49.747147 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-05 00:27:49.750332 | orchestrator | Monday 05 May 2025 00:27:49 +0000 (0:00:00.080) 0:00:00.080 ************
2025-05-05 00:27:50.973444 | orchestrator | ok: [testbed-manager]
2025-05-05 00:27:52.013861 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:52.015254 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:52.016002 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:52.018163 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:52.019108 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:52.020987 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:52.021408 | orchestrator |
2025-05-05 00:27:52.022506 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-05 00:27:52.031119 | orchestrator | Monday 05 May 2025 00:27:52 +0000 (0:00:02.270) 0:00:02.350 ************
2025-05-05 00:27:53.127046 | orchestrator | ok: [testbed-manager]
2025-05-05 00:27:54.040277 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:54.041560 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:54.042410 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:27:54.043280 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:54.044254 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:27:54.046069 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:27:54.046830 | orchestrator |
2025-05-05 00:27:54.047114 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-05 00:27:54.047638 | orchestrator |
2025-05-05 00:27:54.048376 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-05 00:27:54.048926 | orchestrator | Monday 05 May 2025 00:27:54 +0000 (0:00:02.019) 0:00:04.369 ************
2025-05-05 00:27:54.146006 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:54.147841 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:54.149068 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:54.150325 | orchestrator |
2025-05-05 00:27:54.152829 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-05 00:27:54.271680 | orchestrator | Monday 05 May 2025 00:27:54 +0000 (0:00:00.113) 0:00:04.483 ************
2025-05-05 00:27:54.271808 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:54.275126 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:54.275803 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:54.277618 | orchestrator |
2025-05-05 00:27:54.280331 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-05 00:27:54.281358 | orchestrator | Monday 05 May 2025 00:27:54 +0000 (0:00:00.125) 0:00:04.609 ************
2025-05-05 00:27:54.390966 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:54.394782 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:54.398226 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:54.398256 | orchestrator |
2025-05-05 00:27:54.398279 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-05 00:27:54.398582 | orchestrator | Monday 05 May 2025 00:27:54 +0000 (0:00:00.119) 0:00:04.728 ************
2025-05-05 00:27:54.528620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:27:54.974303 | orchestrator |
2025-05-05 00:27:54.974426 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-05 00:27:54.974446 | orchestrator | Monday 05 May 2025 00:27:54 +0000 (0:00:00.134) 0:00:04.863 ************
2025-05-05 00:27:54.974478 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:54.974900 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:54.975535 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:54.976234 | orchestrator |
2025-05-05 00:27:54.976683 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-05 00:27:54.978358 | orchestrator | Monday 05 May 2025 00:27:54 +0000 (0:00:00.446) 0:00:05.310 ************
2025-05-05 00:27:55.073164 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:27:55.073553 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:27:55.078920 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:27:55.079659 | orchestrator |
2025-05-05 00:27:55.079709 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-05 00:27:55.079737 | orchestrator | Monday 05 May 2025 00:27:55 +0000 (0:00:00.099) 0:00:05.410 ************
2025-05-05 00:27:56.028893 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:56.029105 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:56.029133 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:56.029357 | orchestrator |
2025-05-05 00:27:56.030011 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-05 00:27:56.032438 | orchestrator | Monday 05 May 2025 00:27:56 +0000 (0:00:00.951) 0:00:06.361 ************
2025-05-05 00:27:56.560218 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:27:56.563866 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:27:56.563948 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:27:56.564015 | orchestrator |
2025-05-05 00:27:56.564040 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-05 00:27:56.564525 | orchestrator | Monday 05 May 2025 00:27:56 +0000 (0:00:00.536) 0:00:06.897 ************
2025-05-05 00:27:57.598554 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:27:57.601219 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:27:57.601887 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:27:57.601935 | orchestrator |
2025-05-05 00:27:57.602664 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-05 00:27:57.603573 | orchestrator | Monday 05 May 2025 00:27:57 +0000 (0:00:01.036) 0:00:07.934 ************
2025-05-05 00:28:10.854383 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:10.855065 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:10.855113 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:10.855132 | orchestrator |
2025-05-05 00:28:10.855151 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-05 00:28:10.855238 | orchestrator | Monday 05 May 2025 00:28:10 +0000 (0:00:13.246) 0:00:21.181 ************
2025-05-05 00:28:10.895608 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:28:10.929347 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:28:10.929631 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:28:10.929665 | orchestrator |
2025-05-05 00:28:10.929682 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-05 00:28:10.929704 | orchestrator | Monday 05 May 2025 00:28:10 +0000 (0:00:00.086) 0:00:21.267 ************
2025-05-05 00:28:18.477263 |
orchestrator | changed: [testbed-node-3] 2025-05-05 00:28:18.482252 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:28:18.482309 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:28:18.482373 | orchestrator | 2025-05-05 00:28:18.483178 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-05 00:28:18.483893 | orchestrator | Monday 05 May 2025 00:28:18 +0000 (0:00:07.546) 0:00:28.814 ************ 2025-05-05 00:28:18.898552 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:18.901130 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:18.901203 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:18.901761 | orchestrator | 2025-05-05 00:28:18.903212 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-05 00:28:18.903833 | orchestrator | Monday 05 May 2025 00:28:18 +0000 (0:00:00.421) 0:00:29.235 ************ 2025-05-05 00:28:22.411032 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-05 00:28:22.412569 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-05 00:28:22.412630 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-05 00:28:22.412658 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-05 00:28:22.414294 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-05 00:28:22.414455 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-05 00:28:22.414487 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-05 00:28:22.414552 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-05 00:28:22.415316 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-05 00:28:22.415345 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 
2025-05-05 00:28:22.415749 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-05 00:28:22.416004 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-05 00:28:22.416032 | orchestrator | 2025-05-05 00:28:22.418068 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-05 00:28:22.418961 | orchestrator | Monday 05 May 2025 00:28:22 +0000 (0:00:03.511) 0:00:32.747 ************ 2025-05-05 00:28:23.593527 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:23.593942 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:23.593986 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:23.594916 | orchestrator | 2025-05-05 00:28:23.596177 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-05 00:28:23.597355 | orchestrator | 2025-05-05 00:28:23.597442 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-05 00:28:23.598509 | orchestrator | Monday 05 May 2025 00:28:23 +0000 (0:00:01.182) 0:00:33.929 ************ 2025-05-05 00:28:25.317510 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:28:28.701703 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:28:28.701909 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:28:28.702494 | orchestrator | ok: [testbed-manager] 2025-05-05 00:28:28.703035 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:28.703773 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:28.704201 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:28.705028 | orchestrator | 2025-05-05 00:28:28.706216 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:28:28.706582 | orchestrator | 2025-05-05 00:28:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-05 00:28:28.706610 | orchestrator | 2025-05-05 00:28:28 | INFO  | Please wait and do not abort execution. 2025-05-05 00:28:28.706627 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:28:28.707528 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:28:28.707980 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:28:28.708764 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:28:28.709416 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:28:28.710176 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:28:28.710786 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:28:28.711349 | orchestrator | 2025-05-05 00:28:28.711814 | orchestrator | Monday 05 May 2025 00:28:28 +0000 (0:00:05.107) 0:00:39.037 ************ 2025-05-05 00:28:28.712663 | orchestrator | =============================================================================== 2025-05-05 00:28:28.713015 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.25s 2025-05-05 00:28:28.713492 | orchestrator | Install required packages (Debian) -------------------------------------- 7.55s 2025-05-05 00:28:28.713897 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.11s 2025-05-05 00:28:28.714259 | orchestrator | Copy fact files --------------------------------------------------------- 3.51s 2025-05-05 00:28:28.714658 | orchestrator | Create custom facts directory ------------------------------------------- 2.27s 2025-05-05 00:28:28.714937 | orchestrator | Copy fact 
file ---------------------------------------------------------- 2.02s 2025-05-05 00:28:28.715399 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.18s 2025-05-05 00:28:28.715757 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s 2025-05-05 00:28:28.716065 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.95s 2025-05-05 00:28:28.716349 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.54s 2025-05-05 00:28:28.716737 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-05-05 00:28:28.716974 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-05-05 00:28:28.717331 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-05-05 00:28:28.717949 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s 2025-05-05 00:28:28.718724 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.12s 2025-05-05 00:28:28.719079 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-05-05 00:28:28.719575 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-05-05 00:28:28.719854 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-05-05 00:28:29.073505 | orchestrator | + osism apply bootstrap 2025-05-05 00:28:30.451075 | orchestrator | 2025-05-05 00:28:30 | INFO  | Task 317d110b-6647-4840-9ea0-fa7a91920c5c (bootstrap) was prepared for execution. 2025-05-05 00:28:33.194293 | orchestrator | 2025-05-05 00:28:30 | INFO  | It takes a moment until task 317d110b-6647-4840-9ea0-fa7a91920c5c (bootstrap) has been started and output is visible here. 
2025-05-05 00:28:33.194476 | orchestrator |
2025-05-05 00:28:33.194555 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-05 00:28:33.194957 | orchestrator |
2025-05-05 00:28:33.196527 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-05 00:28:33.197188 | orchestrator | Monday 05 May 2025 00:28:33 +0000 (0:00:00.094) 0:00:00.094 ************
2025-05-05 00:28:33.271212 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:33.298306 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:33.316847 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:33.388378 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:33.389015 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:33.390932 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:33.393710 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:33.394303 | orchestrator |
2025-05-05 00:28:33.395328 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-05 00:28:33.396545 | orchestrator |
2025-05-05 00:28:33.397226 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-05 00:28:33.398963 | orchestrator | Monday 05 May 2025 00:28:33 +0000 (0:00:00.198) 0:00:00.293 ************
2025-05-05 00:28:37.068968 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:37.069229 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:37.069966 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:37.070349 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:37.071272 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:37.072073 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:37.072659 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:37.073518 | orchestrator |
2025-05-05 00:28:37.074009 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-05 00:28:37.074935 | orchestrator |
2025-05-05 00:28:37.076292 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-05 00:28:37.076948 | orchestrator | Monday 05 May 2025 00:28:37 +0000 (0:00:03.679) 0:00:03.973 ************
2025-05-05 00:28:37.150650 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-05 00:28:37.150803 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-05 00:28:37.186376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-05 00:28:37.186508 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-05 00:28:37.186601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:28:37.186829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-05 00:28:37.186874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:28:37.189083 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-05 00:28:37.189230 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-05 00:28:37.189650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:28:37.217343 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-05 00:28:37.217453 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-05 00:28:37.217751 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-05 00:28:37.218165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-05 00:28:37.218245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-05 00:28:37.448090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-05 00:28:37.448302 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-05 00:28:37.449517 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-05 00:28:37.450800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-05 00:28:37.451631 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-05 00:28:37.452822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-05 00:28:37.453315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-05 00:28:37.454399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-05 00:28:37.455106 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:28:37.455935 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:37.457001 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-05 00:28:37.457214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:28:37.457943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-05 00:28:37.458763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:28:37.459124 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-05 00:28:37.459986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-05 00:28:37.460236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-05 00:28:37.460713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:28:37.461078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-05 00:28:37.461622 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:28:37.461905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-05 00:28:37.462563 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-05 00:28:37.463298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-05 00:28:37.464622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:28:37.468175 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-05 00:28:37.470127 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-05 00:28:37.470709 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:28:37.471450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-05 00:28:37.471753 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-05 00:28:37.472313 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-05 00:28:37.472869 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-05 00:28:37.473651 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:28:37.474116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-05 00:28:37.474732 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-05 00:28:37.475251 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-05 00:28:37.475568 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-05 00:28:37.476109 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-05 00:28:37.476806 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-05 00:28:37.477326 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:28:37.477755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-05 00:28:37.477978 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:28:37.478432 | orchestrator |
2025-05-05 00:28:37.478914 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-05 00:28:37.479407 | orchestrator |
2025-05-05 00:28:37.479961 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-05 00:28:37.480256 | orchestrator | Monday 05 May 2025 00:28:37 +0000 (0:00:00.378) 0:00:04.351 ************
2025-05-05 00:28:37.500320 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:37.523113 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:37.540944 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:37.560343 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:37.616495 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:37.616694 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:37.616719 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:37.616734 | orchestrator |
2025-05-05 00:28:37.616756 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-05 00:28:37.617304 | orchestrator | Monday 05 May 2025 00:28:37 +0000 (0:00:00.166) 0:00:04.517 ************
2025-05-05 00:28:39.793481 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:39.796029 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:39.796099 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:39.797030 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:39.798089 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:39.798807 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:39.799761 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:39.800606 | orchestrator |
2025-05-05 00:28:39.801423 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-05 00:28:39.802010 | orchestrator | Monday 05 May 2025 00:28:39 +0000 (0:00:02.177) 0:00:06.695 ************
2025-05-05 00:28:40.949428 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:40.950638 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:40.951845 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:40.953972 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:40.954115 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:40.955427 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:40.956268 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:40.957360 | orchestrator |
2025-05-05 00:28:40.957921 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-05 00:28:40.958637 | orchestrator | Monday 05 May 2025 00:28:40 +0000 (0:00:01.153) 0:00:07.849 ************
2025-05-05 00:28:41.213843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:28:41.214639 | orchestrator |
2025-05-05 00:28:41.216246 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-05 00:28:41.216382 | orchestrator | Monday 05 May 2025 00:28:41 +0000 (0:00:00.266) 0:00:08.115 ************
2025-05-05 00:28:43.304787 | orchestrator | changed: [testbed-manager]
2025-05-05 00:28:43.305206 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:43.305269 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:28:43.305936 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:43.307032 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:28:43.307716 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:28:43.307856 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:43.308704 | orchestrator |
2025-05-05 00:28:43.309707 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-05 00:28:43.310162 | orchestrator | Monday 05 May 2025 00:28:43 +0000 (0:00:02.082) 0:00:10.197 ************
2025-05-05 00:28:43.360803 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:43.545918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:28:43.547699 | orchestrator |
2025-05-05 00:28:43.547802 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-05 00:28:43.555510 | orchestrator | Monday 05 May 2025 00:28:43 +0000 (0:00:00.251) 0:00:10.449 ************
2025-05-05 00:28:44.703715 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:44.703892 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:28:44.703920 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:44.704753 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:44.706206 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:28:44.707233 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:28:44.709478 | orchestrator |
2025-05-05 00:28:44.710190 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-05 00:28:44.711033 | orchestrator | Monday 05 May 2025 00:28:44 +0000 (0:00:01.152) 0:00:11.602 ************
2025-05-05 00:28:44.754180 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:45.299116 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:28:45.299366 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:45.300432 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:45.301947 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:28:45.303246 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:28:45.304016 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:45.304658 | orchestrator |
2025-05-05 00:28:45.305367 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-05 00:28:45.305866 | orchestrator | Monday 05 May 2025 00:28:45 +0000 (0:00:00.600) 0:00:12.202 ************
2025-05-05 00:28:45.388583 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:28:45.411222 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:28:45.432566 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:28:45.721702 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:28:45.722715 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:28:45.722760 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:28:45.725207 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:45.726276 | orchestrator |
2025-05-05 00:28:45.727834 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-05 00:28:45.728291 | orchestrator | Monday 05 May 2025 00:28:45 +0000 (0:00:00.422) 0:00:12.625 ************
2025-05-05 00:28:45.804499 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:45.835201 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:28:45.855440 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:28:45.882613 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:28:45.944250 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:28:45.945170 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:28:45.945816 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:28:45.946681 | orchestrator |
2025-05-05 00:28:45.947408 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-05 00:28:45.948120 | orchestrator | Monday 05 May 2025 00:28:45 +0000 (0:00:00.223) 0:00:12.848 ************
2025-05-05 00:28:46.215346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:28:46.215928 | orchestrator |
2025-05-05 00:28:46.218921 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-05 00:28:46.497722 | orchestrator | Monday 05 May 2025 00:28:46 +0000 (0:00:00.269) 0:00:13.118 ************
2025-05-05 00:28:46.497872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:28:46.498391 | orchestrator |
2025-05-05 00:28:46.498438 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-05 00:28:46.498642 | orchestrator | Monday 05 May 2025 00:28:46 +0000 (0:00:00.282) 0:00:13.400 ************
2025-05-05 00:28:47.806779 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:47.807010 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:47.808779 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:47.809265 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:47.809302 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:47.809919 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:47.810911 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:47.811555 | orchestrator |
2025-05-05 00:28:47.812821 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-05 00:28:47.813611 | orchestrator | Monday 05 May 2025 00:28:47 +0000 (0:00:01.308) 0:00:14.709 ************
2025-05-05 00:28:47.855410 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:47.906848 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:28:47.932845 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:28:47.960963 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:28:48.019256 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:28:48.019996 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:28:48.020895 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:28:48.021908 | orchestrator |
2025-05-05 00:28:48.022685 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-05 00:28:48.023754 | orchestrator | Monday 05 May 2025 00:28:48 +0000 (0:00:00.213) 0:00:14.922 ************
2025-05-05 00:28:48.531098 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:48.531768 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:48.531815 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:48.534427 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:48.535031 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:48.535881 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:48.536307 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:48.537050 | orchestrator |
2025-05-05 00:28:48.537804 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-05 00:28:48.538350 | orchestrator | Monday 05 May 2025 00:28:48 +0000 (0:00:00.510) 0:00:15.433 ************
2025-05-05 00:28:48.639061 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:48.661843 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:28:48.691560 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:28:48.772340 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:28:48.772701 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:28:48.773351 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:28:48.776865 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:28:48.777235 | orchestrator |
2025-05-05 00:28:48.777805 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-05 00:28:48.781706 | orchestrator | Monday 05 May 2025 00:28:48 +0000 (0:00:00.242) 0:00:15.675 ************
2025-05-05 00:28:49.337611 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:49.338368 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:49.341725 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:49.342737 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:28:49.343349 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:49.344048 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:28:49.344568 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:28:49.345250 | orchestrator |
2025-05-05 00:28:49.345715 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-05 00:28:49.346268 | orchestrator | Monday 05 May 2025 00:28:49 +0000 (0:00:00.565) 0:00:16.241 ************
2025-05-05 00:28:50.352106 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:50.352895 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:50.353308 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:50.353900 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:28:50.354888 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:28:50.355039 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:28:50.355713 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:50.356241 | orchestrator |
2025-05-05 00:28:50.357004 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-05 00:28:50.357638 | orchestrator | Monday 05 May 2025 00:28:50 +0000 (0:00:01.013) 0:00:17.254 ************
2025-05-05 00:28:51.471074 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:51.471375 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:51.471415 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:51.471628 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:51.472689 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:51.472784 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:51.473957 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:51.474089 | orchestrator |
2025-05-05 00:28:51.474751 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-05 00:28:51.475397 | orchestrator | Monday 05 May 2025 00:28:51 +0000 (0:00:01.118) 0:00:18.372 ************
2025-05-05 00:28:51.767505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:28:51.767743 | orchestrator |
2025-05-05 00:28:51.768921 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-05 00:28:51.771735 | orchestrator | Monday 05 May 2025 00:28:51 +0000 (0:00:00.297) 0:00:18.670 ************
2025-05-05 00:28:51.843231 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:28:53.191487 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:28:53.191864 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:28:53.192426 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:28:53.192484 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:28:53.192538 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:28:53.192725 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:28:53.194070 | orchestrator |
2025-05-05 00:28:53.194283 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-05 00:28:53.195287 | orchestrator | Monday 05 May 2025 00:28:53 +0000 (0:00:01.422) 0:00:20.093 ************
2025-05-05 00:28:53.264589 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:53.298189 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:53.323684 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:53.351006 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:53.410257 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:53.410805 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:53.414973 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:53.511639 | orchestrator |
2025-05-05 00:28:53.511785 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-05 00:28:53.511808 | orchestrator | Monday 05 May 2025 00:28:53 +0000 (0:00:00.220) 0:00:20.313 ************
2025-05-05 00:28:53.511840 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:53.544551 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:53.575579 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:53.645107 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:53.645730 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:53.646704 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:53.649350 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:53.717333 | orchestrator |
2025-05-05 00:28:53.717456 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-05 00:28:53.717476 | orchestrator | Monday 05 May 2025 00:28:53 +0000 (0:00:00.235) 0:00:20.549 ************
2025-05-05 00:28:53.717507 | orchestrator | ok: [testbed-manager]
2025-05-05 00:28:53.736574 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:28:53.762171 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:28:53.785810 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:28:53.844325 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:28:53.848093 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:28:54.101320 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:28:54.101443 | orchestrator |
2025-05-05 00:28:54.101464 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-05 00:28:54.101480 | orchestrator | Monday 05 May 2025 00:28:53 +0000 (0:00:00.199) 0:00:20.748 ************
2025-05-05 00:28:54.101513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:28:54.101674 | orchestrator |
2025-05-05 00:28:54.101700 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-05 00:28:54.101722 |
orchestrator | Monday 05 May 2025 00:28:54 +0000 (0:00:00.256) 0:00:21.005 ************ 2025-05-05 00:28:54.629765 | orchestrator | ok: [testbed-manager] 2025-05-05 00:28:54.630737 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:54.631566 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:28:54.632307 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:54.633855 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:28:54.634220 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:54.634257 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:28:54.634845 | orchestrator | 2025-05-05 00:28:54.635333 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-05 00:28:54.636118 | orchestrator | Monday 05 May 2025 00:28:54 +0000 (0:00:00.526) 0:00:21.532 ************ 2025-05-05 00:28:54.706577 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:28:54.728125 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:28:54.752088 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:28:54.775123 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:28:54.846683 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:28:54.847195 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:28:54.847645 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:28:54.848820 | orchestrator | 2025-05-05 00:28:54.849621 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-05 00:28:54.850351 | orchestrator | Monday 05 May 2025 00:28:54 +0000 (0:00:00.217) 0:00:21.749 ************ 2025-05-05 00:28:55.942548 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:55.942762 | orchestrator | changed: [testbed-manager] 2025-05-05 00:28:55.943428 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:55.943497 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:55.943577 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:28:55.944760 | orchestrator | 
changed: [testbed-node-2] 2025-05-05 00:28:55.945530 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:28:55.945611 | orchestrator | 2025-05-05 00:28:55.947072 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-05 00:28:55.947850 | orchestrator | Monday 05 May 2025 00:28:55 +0000 (0:00:01.094) 0:00:22.844 ************ 2025-05-05 00:28:56.495742 | orchestrator | ok: [testbed-manager] 2025-05-05 00:28:56.496794 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:56.497001 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:56.497909 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:56.498545 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:28:56.499397 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:28:56.499981 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:28:56.501052 | orchestrator | 2025-05-05 00:28:56.502109 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-05 00:28:56.502695 | orchestrator | Monday 05 May 2025 00:28:56 +0000 (0:00:00.552) 0:00:23.397 ************ 2025-05-05 00:28:57.629657 | orchestrator | ok: [testbed-manager] 2025-05-05 00:28:57.629999 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:28:57.630112 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:28:57.630819 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:28:57.632850 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:28:57.633212 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:28:57.634935 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:28:57.635352 | orchestrator | 2025-05-05 00:28:57.635410 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-05 00:28:57.636030 | orchestrator | Monday 05 May 2025 00:28:57 +0000 (0:00:01.132) 0:00:24.529 ************ 2025-05-05 00:29:11.152296 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:11.152597 | orchestrator | ok: 
[testbed-node-4] 2025-05-05 00:29:11.152647 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:11.152683 | orchestrator | changed: [testbed-manager] 2025-05-05 00:29:11.153056 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:29:11.154439 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:29:11.154732 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:29:11.157259 | orchestrator | 2025-05-05 00:29:11.157935 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-05 00:29:11.157982 | orchestrator | Monday 05 May 2025 00:29:11 +0000 (0:00:13.521) 0:00:38.051 ************ 2025-05-05 00:29:11.238530 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:11.279959 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:11.309584 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:11.336696 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:11.390527 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:11.390655 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:11.391689 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:11.392460 | orchestrator | 2025-05-05 00:29:11.393249 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-05 00:29:11.394467 | orchestrator | Monday 05 May 2025 00:29:11 +0000 (0:00:00.243) 0:00:38.294 ************ 2025-05-05 00:29:11.461940 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:11.492252 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:11.517391 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:11.544890 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:11.611813 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:11.612510 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:11.614094 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:11.614221 | orchestrator | 2025-05-05 00:29:11.614999 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-05 00:29:11.615426 | orchestrator | Monday 05 May 2025 00:29:11 +0000 (0:00:00.221) 0:00:38.515 ************ 2025-05-05 00:29:11.685009 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:11.714323 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:11.739048 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:11.767995 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:11.847231 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:11.847648 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:11.848199 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:11.848792 | orchestrator | 2025-05-05 00:29:11.849837 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-05 00:29:11.850365 | orchestrator | Monday 05 May 2025 00:29:11 +0000 (0:00:00.234) 0:00:38.750 ************ 2025-05-05 00:29:12.158199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:29:12.158689 | orchestrator | 2025-05-05 00:29:12.159596 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-05 00:29:12.160439 | orchestrator | Monday 05 May 2025 00:29:12 +0000 (0:00:00.310) 0:00:39.061 ************ 2025-05-05 00:29:13.820176 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:13.820301 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:13.820781 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:13.820800 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:13.823738 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:15.095654 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:15.095787 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:15.095809 | orchestrator | 2025-05-05 00:29:15.095827 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-05 00:29:15.095843 | orchestrator | Monday 05 May 2025 00:29:13 +0000 (0:00:01.660) 0:00:40.721 ************ 2025-05-05 00:29:15.095875 | orchestrator | changed: [testbed-manager] 2025-05-05 00:29:15.096080 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:29:15.096105 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:29:15.096180 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:29:15.096203 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:29:15.097715 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:29:15.099891 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:29:15.100761 | orchestrator | 2025-05-05 00:29:15.101046 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-05 00:29:15.102157 | orchestrator | Monday 05 May 2025 00:29:15 +0000 (0:00:01.272) 0:00:41.994 ************ 2025-05-05 00:29:15.905719 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:15.906344 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:15.906388 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:15.907500 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:15.908352 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:15.909083 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:15.909905 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:15.912434 | orchestrator | 2025-05-05 00:29:15.913097 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-05 00:29:15.913698 | orchestrator | Monday 05 May 2025 00:29:15 +0000 (0:00:00.812) 0:00:42.806 ************ 2025-05-05 00:29:16.193683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 
00:29:16.193897 | orchestrator | 2025-05-05 00:29:16.194323 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-05 00:29:16.194954 | orchestrator | Monday 05 May 2025 00:29:16 +0000 (0:00:00.290) 0:00:43.097 ************ 2025-05-05 00:29:17.196794 | orchestrator | changed: [testbed-manager] 2025-05-05 00:29:17.197054 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:29:17.198077 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:29:17.200332 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:29:17.201899 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:29:17.203023 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:29:17.203633 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:29:17.204352 | orchestrator | 2025-05-05 00:29:17.204758 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-05 00:29:17.205586 | orchestrator | Monday 05 May 2025 00:29:17 +0000 (0:00:01.001) 0:00:44.098 ************ 2025-05-05 00:29:17.287409 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:29:17.311171 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:29:17.339752 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:29:17.494711 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:29:17.495877 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:29:17.497175 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:29:17.498307 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:29:17.499678 | orchestrator | 2025-05-05 00:29:17.500649 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-05 00:29:17.501782 | orchestrator | Monday 05 May 2025 00:29:17 +0000 (0:00:00.299) 0:00:44.397 ************ 2025-05-05 00:29:28.903222 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:29:28.903491 | orchestrator | changed: [testbed-node-1] 2025-05-05 
00:29:28.903520 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:29:28.903542 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:29:28.904706 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:29:28.905848 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:29:28.906569 | orchestrator | changed: [testbed-manager] 2025-05-05 00:29:28.907368 | orchestrator | 2025-05-05 00:29:28.909808 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-05 00:29:29.650998 | orchestrator | Monday 05 May 2025 00:29:28 +0000 (0:00:11.404) 0:00:55.802 ************ 2025-05-05 00:29:29.651247 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:29.652699 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:29.653645 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:29.655034 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:29.655758 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:29.656778 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:29.657784 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:29.658573 | orchestrator | 2025-05-05 00:29:29.659340 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-05 00:29:29.660256 | orchestrator | Monday 05 May 2025 00:29:29 +0000 (0:00:00.752) 0:00:56.554 ************ 2025-05-05 00:29:30.516228 | orchestrator | ok: [testbed-manager] 2025-05-05 00:29:30.517259 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:29:30.518196 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:29:30.519139 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:29:30.520225 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:29:30.520832 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:29:30.522571 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:29:30.523634 | orchestrator | 2025-05-05 00:29:30.523664 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-05-05 00:29:30.523687 | orchestrator | Monday 05 May 2025 00:29:30 +0000 (0:00:00.864) 0:00:57.419 ************
2025-05-05 00:29:30.587280 | orchestrator | ok: [testbed-manager]
2025-05-05 00:29:30.611304 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:29:30.637222 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:29:30.657201 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:29:30.707924 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:29:30.708081 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:29:30.708799 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:29:30.710222 | orchestrator |
2025-05-05 00:29:30.710870 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-05 00:29:30.711717 | orchestrator | Monday 05 May 2025 00:29:30 +0000 (0:00:00.193) 0:00:57.612 ************
2025-05-05 00:29:30.773169 | orchestrator | ok: [testbed-manager]
2025-05-05 00:29:30.805001 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:29:30.824048 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:29:30.846290 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:29:30.916698 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:29:30.918212 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:29:30.919016 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:29:30.919623 | orchestrator |
2025-05-05 00:29:30.920099 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-05 00:29:30.920445 | orchestrator | Monday 05 May 2025 00:29:30 +0000 (0:00:00.208) 0:00:57.821 ************
2025-05-05 00:29:31.187159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:29:31.187361 | orchestrator |
2025-05-05 00:29:31.188158 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-05 00:29:31.189300 | orchestrator | Monday 05 May 2025 00:29:31 +0000 (0:00:00.267) 0:00:58.088 ************
2025-05-05 00:29:32.660192 | orchestrator | ok: [testbed-manager]
2025-05-05 00:29:32.660485 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:29:32.660514 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:29:32.660551 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:29:32.660568 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:29:32.660584 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:29:32.660605 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:29:32.661223 | orchestrator |
2025-05-05 00:29:32.661261 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-05 00:29:32.661673 | orchestrator | Monday 05 May 2025 00:29:32 +0000 (0:00:01.467) 0:00:59.556 ************
2025-05-05 00:29:33.257216 | orchestrator | changed: [testbed-manager]
2025-05-05 00:29:33.258346 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:29:33.259568 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:29:33.261209 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:29:33.261256 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:29:33.261640 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:29:33.261669 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:29:33.262374 | orchestrator |
2025-05-05 00:29:33.262423 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-05 00:29:33.263837 | orchestrator | Monday 05 May 2025 00:29:33 +0000 (0:00:00.602) 0:01:00.159 ************
2025-05-05 00:29:33.339084 | orchestrator | ok: [testbed-manager]
2025-05-05 00:29:33.369552 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:29:33.395072 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:29:33.423224 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:29:33.493472 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:29:33.493675 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:29:33.494484 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:29:33.494782 | orchestrator |
2025-05-05 00:29:33.495312 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-05 00:29:33.496056 | orchestrator | Monday 05 May 2025 00:29:33 +0000 (0:00:00.237) 0:01:00.396 ************
2025-05-05 00:29:34.551423 | orchestrator | ok: [testbed-manager]
2025-05-05 00:29:34.552427 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:29:34.553333 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:29:34.553762 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:29:34.554908 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:29:34.555755 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:29:34.556007 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:29:34.558477 | orchestrator |
2025-05-05 00:29:34.558835 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-05 00:29:34.559399 | orchestrator | Monday 05 May 2025 00:29:34 +0000 (0:00:01.056) 0:01:01.453 ************
2025-05-05 00:29:36.073917 | orchestrator | changed: [testbed-manager]
2025-05-05 00:29:36.074271 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:29:36.075697 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:29:36.076178 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:29:36.078298 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:29:36.078913 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:29:36.079756 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:29:36.081233 | orchestrator |
2025-05-05 00:29:36.082480 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-05 00:29:36.083443 | orchestrator | Monday 05 May 2025 00:29:36 +0000 (0:00:01.522) 0:01:02.975 ************
2025-05-05 00:29:38.400316 | orchestrator | ok: [testbed-manager]
2025-05-05 00:29:38.400817 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:29:38.401395 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:29:38.404125 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:29:38.405318 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:29:38.406123 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:29:38.407170 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:29:38.407909 | orchestrator |
2025-05-05 00:29:38.408816 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-05 00:29:38.409570 | orchestrator | Monday 05 May 2025 00:29:38 +0000 (0:00:02.325) 0:01:05.301 ************
2025-05-05 00:30:16.335869 | orchestrator | ok: [testbed-manager]
2025-05-05 00:30:16.336802 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:30:16.336838 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:30:16.336853 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:30:16.336867 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:30:16.336881 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:30:16.336903 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:30:16.337255 | orchestrator |
2025-05-05 00:30:16.338849 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-05 00:30:16.339905 | orchestrator | Monday 05 May 2025 00:30:16 +0000 (0:00:37.932) 0:01:43.233 ************
2025-05-05 00:31:37.719983 | orchestrator | changed: [testbed-manager]
2025-05-05 00:31:39.243553 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:31:39.243710 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:31:39.243730 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:31:39.243772 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:31:39.243787 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:31:39.243801 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:31:39.243816 | orchestrator |
2025-05-05 00:31:39.243832 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-05 00:31:39.243848 | orchestrator | Monday 05 May 2025 00:31:37 +0000 (0:01:21.379) 0:03:04.613 ************
2025-05-05 00:31:39.243882 | orchestrator | ok: [testbed-manager]
2025-05-05 00:31:39.245272 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:31:39.245908 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:31:39.245939 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:31:39.246652 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:31:39.247445 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:31:39.247940 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:31:39.248345 | orchestrator |
2025-05-05 00:31:39.248759 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-05 00:31:39.249088 | orchestrator | Monday 05 May 2025 00:31:39 +0000 (0:00:01.532) 0:03:06.146 ************
2025-05-05 00:31:50.654373 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:31:50.654958 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:31:50.655126 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:31:50.655159 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:31:50.655181 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:31:50.655444 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:31:50.656121 | orchestrator | changed: [testbed-manager]
2025-05-05 00:31:50.656792 | orchestrator |
2025-05-05 00:31:50.657306 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-05 00:31:50.657940 | orchestrator | Monday 05 May 2025 00:31:50 +0000 (0:00:11.406) 0:03:17.552 ************
2025-05-05 00:31:50.991782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-05 00:31:50.992186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-05 00:31:50.996068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-05 00:31:50.996706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-05 00:31:50.996742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-05 00:31:50.996760 | orchestrator |
2025-05-05 00:31:50.996785 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-05 00:31:50.997964 | orchestrator | Monday 05 May 2025 00:31:50 +0000 (0:00:00.342) 0:03:17.894 ************
2025-05-05 00:31:51.029020 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.059381 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.090333 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:31:51.090477 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.119073 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:31:51.149268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.150106 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:31:51.177891 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:31:51.661160 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.661330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.661360 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-05 00:31:51.661662 | orchestrator |
2025-05-05 00:31:51.662143 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-05 00:31:51.662337 | orchestrator | Monday 05 May 2025 00:31:51 +0000 (0:00:00.670) 0:03:18.565 ************
2025-05-05 00:31:51.724701 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-05 00:31:51.727910 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-05 00:31:51.728028 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-05 00:31:51.728425 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-05 00:31:51.729712 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-05 00:31:51.732459 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-05 00:31:51.732513 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-05 00:31:51.732540 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-05 00:31:51.734524 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-05 00:31:51.735200 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-05 00:31:51.762623 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:31:51.762774 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-05 00:31:51.765422 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-05 00:31:51.765577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-05 00:31:51.765606 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-05 00:31:51.765864 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-05 00:31:51.766234 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-05 00:31:51.766342 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-05 00:31:51.766968 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-05 00:31:51.767091 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-05 00:31:51.769913 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-05 00:31:51.770231 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-05 00:31:51.807186 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-05 00:31:51.807381 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-05 00:31:51.807448 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-05 00:31:51.807898 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-05 00:31:51.808129 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-05 00:31:51.808399 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-05 00:31:51.808707 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-05 00:31:51.808964 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-05 00:31:51.809241 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-05 00:31:51.810939 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-05 00:31:51.813071 |
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-05 00:31:51.813294 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-05 00:31:51.813637 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-05 00:31:51.838347 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:31:51.838530 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-05 00:31:51.838628 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-05 00:31:51.838880 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-05 00:31:51.839096 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-05 00:31:51.868942 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-05 00:31:51.869323 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:31:51.869368 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-05 00:31:55.391539 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:31:55.392378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-05 00:31:55.392974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-05 00:31:55.396011 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-05 00:31:55.397148 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-05 00:31:55.397892 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-05 00:31:55.398922 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-05 00:31:55.399393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-05 00:31:55.400236 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-05 00:31:55.400994 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-05 00:31:55.402402 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-05 00:31:55.403794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-05 00:31:55.404735 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-05 00:31:55.405553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-05 00:31:55.406264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-05 00:31:55.407502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-05 00:31:55.408165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-05 00:31:55.408848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-05 00:31:55.409301 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-05 00:31:55.410137 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-05 00:31:55.410698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-05-05 00:31:55.411226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-05 00:31:55.411817 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-05 00:31:55.412437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-05 00:31:55.413028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-05 00:31:55.413677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-05 00:31:55.414399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-05 00:31:55.414673 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-05 00:31:55.415117 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-05 00:31:55.415568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-05 00:31:55.415936 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-05 00:31:55.416393 | orchestrator |
2025-05-05 00:31:55.416832 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-05 00:31:55.417180 | orchestrator | Monday 05 May 2025 00:31:55 +0000 (0:00:03.727) 0:03:22.293 ************
2025-05-05 00:31:55.946590 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.946774 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.947709 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.947926 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.948628 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.949314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.949742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-05 00:31:55.950526 | orchestrator |
2025-05-05 00:31:55.950691 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-05 00:31:55.951306 | orchestrator | Monday 05 May 2025 00:31:55 +0000 (0:00:00.554) 0:03:22.847 ************
2025-05-05 00:31:56.000508 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.024525 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:31:56.104467 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.431468 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.433930 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:31:56.433993 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:31:56.434719 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.434753 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:31:56.434779 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.435385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.436340 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-05 00:31:56.436875 | orchestrator |
2025-05-05 00:31:56.437352 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-05 00:31:56.437790 | orchestrator | Monday 05 May 2025 00:31:56 +0000 (0:00:00.485) 0:03:23.333 ************
2025-05-05 00:31:56.492724 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.519439 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:31:56.597965 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.598288 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.986322 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:31:56.987587 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:31:56.987644 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.990874 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:31:56.991105 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.991140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.991155 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-05 00:31:56.991179 | orchestrator |
2025-05-05 00:31:56.991696 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-05 00:31:56.992810 | orchestrator | Monday 05 May 2025 00:31:56 +0000 (0:00:00.289) 0:03:23.889 ************
2025-05-05 00:31:57.066144 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:31:57.089753 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:31:57.115556 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:31:57.141710 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:31:57.275470 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:31:57.276608 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:31:57.280839 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:32:02.966695 | orchestrator |
2025-05-05 00:32:02.966867 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-05 00:32:02.966939 | orchestrator | Monday 05 May 2025 00:31:57 +0000 (0:00:00.289) 0:03:24.179 ************
2025-05-05 00:32:02.966975 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:02.967122 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:02.967146 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:02.967160 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:02.967175 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:02.967195 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:02.967256 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:02.967562 | orchestrator |
2025-05-05 00:32:02.968205 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-05 00:32:02.968819 | orchestrator | Monday 05 May 2025 00:32:02 +0000 (0:00:05.690) 0:03:29.869 ************
2025-05-05 00:32:03.035421 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-05-05 00:32:03.074870 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:32:03.075106 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-05-05 00:32:03.134744 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:32:03.136876 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-05-05 00:32:03.170304 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-05-05 00:32:03.170393 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:32:03.209953 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-05-05 00:32:03.278616 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:32:03.278823 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-05-05 00:32:03.278922 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:32:03.279672 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:32:03.280226 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-05-05 00:32:03.280831 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:32:03.281539 | orchestrator |
2025-05-05 00:32:03.282002 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-05 00:32:03.282467 | orchestrator | Monday 05 May 2025 00:32:03 +0000 (0:00:00.313) 0:03:30.183 ************
2025-05-05 00:32:04.277773 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-05 00:32:04.281372 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-05 00:32:04.282413 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-05 00:32:04.283153 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-05 00:32:04.285016 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-05 00:32:04.285179 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-05 00:32:04.288885 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-05 00:32:04.289457 | orchestrator |
2025-05-05 00:32:04.290298 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-05 00:32:04.290685 | orchestrator | Monday 05 May 2025 00:32:04 +0000 (0:00:00.995) 0:03:31.178 ************
2025-05-05 00:32:04.774318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:32:04.774465 | orchestrator |
2025-05-05 00:32:04.774492 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-05-05 00:32:04.774760 | orchestrator | Monday 05 May 2025 00:32:04 +0000 (0:00:00.492) 0:03:31.671 ************
2025-05-05 00:32:05.969585 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:05.970787 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:05.970831 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:05.970847 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:05.970863 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:05.970878 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:05.970893 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:05.970907 | orchestrator |
2025-05-05 00:32:05.970932 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-05 00:32:05.972759 | orchestrator | Monday 05 May 2025 00:32:05 +0000 (0:00:01.195) 0:03:32.867 ************
2025-05-05 00:32:06.556548 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:06.559013 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:06.560144 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:06.560177 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:06.560192 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:06.560213 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:06.560711 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:06.561156 | orchestrator |
2025-05-05 00:32:06.561529 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-05 00:32:06.562111 | orchestrator | Monday 05 May 2025 00:32:06 +0000 (0:00:00.591) 0:03:33.458 ************
2025-05-05 00:32:07.146439 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:07.147100 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:07.148830 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:07.149321 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:07.149355 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:07.150172 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:07.150572 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:07.151336 | orchestrator |
2025-05-05 00:32:07.151764 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-05 00:32:07.152362 | orchestrator | Monday 05 May 2025 00:32:07 +0000 (0:00:00.591) 0:03:34.049 ************
2025-05-05 00:32:07.701751 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:07.703233 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:07.703387 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:07.705964 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:07.706294 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:07.707091 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:07.708370 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:07.708929 | orchestrator |
2025-05-05 00:32:07.709908 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-05 00:32:07.710572 | orchestrator | Monday 05 May 2025 00:32:07 +0000 (0:00:00.554) 0:03:34.604 ************
2025-05-05 00:32:08.663185 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403464.5202677, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.663438 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403484.0198483, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.663478 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403470.5695207, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.664068 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403474.525674, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.664418 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403472.0591853, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.665156 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403483.6778517, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.667087 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746403487.9422758, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.668156 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403486.0215888, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.668201 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403399.8406444, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.669189 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403408.5696533, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.669624 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403403.0051637, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.670156 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403398.4396703, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.670758 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403405.4491627, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.671261 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746403405.743305, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 00:32:08.671654 | orchestrator |
2025-05-05 00:32:08.671982 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-05 00:32:08.672822 | orchestrator | Monday 05 May 2025 00:32:08 +0000 (0:00:00.960) 0:03:35.564 ************
2025-05-05 00:32:09.792377 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:09.792867 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:09.794224 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:09.794748 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:09.795208 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:09.796301 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:09.797129 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:09.798010 | orchestrator |
2025-05-05 00:32:09.798526 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-05 00:32:09.799028 | orchestrator | Monday 05 May 2025 00:32:09 +0000 (0:00:01.129) 0:03:36.694 ************
2025-05-05 00:32:10.962686 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:10.965441 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:10.966182 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:10.969507 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:10.970333 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:10.970669 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:10.971644 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:10.972424 | orchestrator |
2025-05-05 00:32:10.973071 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-05 00:32:10.975544 | orchestrator | Monday 05 May 2025 00:32:10 +0000 (0:00:01.169) 0:03:37.863 ************
2025-05-05 00:32:11.067278 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:32:11.097856 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:32:11.127913 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:32:11.158402 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:32:11.224550 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:32:11.225439 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:32:11.226270 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:32:11.226916 | orchestrator |
2025-05-05 00:32:11.227435 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-05-05 00:32:11.228207 | orchestrator | Monday 05 May 2025 00:32:11 +0000 (0:00:00.265) 0:03:38.129 ************
2025-05-05 00:32:11.953328 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:11.954251 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:11.955235 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:11.958013 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:11.958475 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:11.958505 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:11.958526 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:11.959576 | orchestrator |
2025-05-05 00:32:11.960244 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-05 00:32:11.960749 | orchestrator | Monday 05 May 2025 00:32:11 +0000 (0:00:00.726) 0:03:38.855 ************
2025-05-05 00:32:12.322358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:32:12.322520 | orchestrator |
2025-05-05 00:32:12.322549 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-05 00:32:12.323029 | orchestrator | Monday 05 May 2025 00:32:12 +0000 (0:00:00.368) 0:03:39.224 ************
2025-05-05 00:32:20.331394 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:20.331603 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:20.333120 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:20.333369 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:20.334263 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:20.335729 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:20.336117 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:20.337929 | orchestrator |
2025-05-05 00:32:20.338887 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-05 00:32:20.339738 | orchestrator | Monday 05 May 2025 00:32:20 +0000 (0:00:08.008) 0:03:47.233 ************
2025-05-05 00:32:21.478124 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:21.478316 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:21.478356 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:21.478964 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:21.479255 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:21.480300 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:21.481022 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:21.481772 | orchestrator |
2025-05-05 00:32:21.482838 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-05 00:32:21.483272 | orchestrator | Monday 05 May 2025 00:32:21 +0000 (0:00:01.147) 0:03:48.380 ************
2025-05-05 00:32:22.474837 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:22.475200 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:22.478875 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:22.479472 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:22.479511 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:22.479527 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:22.479549 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:22.480155 | orchestrator |
2025-05-05 00:32:22.481075 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-05 00:32:22.481537 | orchestrator | Monday 05 May 2025 00:32:22 +0000 (0:00:00.996) 0:03:49.377 ************
2025-05-05 00:32:22.842508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:32:22.842714 | orchestrator |
2025-05-05 00:32:22.844773 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-05 00:32:30.678707 | orchestrator | Monday 05 May 2025 00:32:22 +0000 (0:00:00.368) 0:03:49.745 ************
2025-05-05 00:32:30.678893 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:30.679828 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:30.680978 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:30.682580 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:30.683294 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:30.684312 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:30.684930 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:30.685437 | orchestrator |
2025-05-05 00:32:30.686144 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-05 00:32:30.686706 | orchestrator | Monday 05 May 2025 00:32:30 +0000 (0:00:07.834) 0:03:57.580 ************
2025-05-05 00:32:31.245442 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:31.245796 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:31.246403 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:31.246900 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:31.247580 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:31.248107 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:31.248978 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:31.250138 | orchestrator |
2025-05-05 00:32:31.250724 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-05 00:32:31.251792 | orchestrator | Monday 05 May 2025 00:32:31 +0000 (0:00:00.566) 0:03:58.147 ************
2025-05-05 00:32:32.330990 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:32.331237 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:32.332514 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:32.332710 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:32.333251 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:32.333989 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:32.334443 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:32.335222 | orchestrator |
2025-05-05 00:32:32.335525 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-05-05 00:32:32.336076 | orchestrator | Monday 05 May 2025 00:32:32 +0000 (0:00:01.086) 0:03:59.233 ************
2025-05-05 00:32:33.351284 | orchestrator | changed: [testbed-manager]
2025-05-05 00:32:33.351515 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:32:33.353919 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:32:33.354161 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:32:33.354191 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:32:33.354212 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:32:33.355000 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:32:33.355519 | orchestrator |
2025-05-05 00:32:33.356217 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-05-05 00:32:33.356845 | orchestrator | Monday 05 May 2025 00:32:33 +0000 (0:00:01.019) 0:04:00.253 ************
2025-05-05 00:32:33.431128 | orchestrator | ok: [testbed-manager]
2025-05-05 00:32:33.463544 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:32:33.531246 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:32:33.561119 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:32:33.630293 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:32:33.630472 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:32:33.631221 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:32:33.631859 | orchestrator |
2025-05-05 00:32:33.632382 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default
value] *** 2025-05-05 00:32:33.633001 | orchestrator | Monday 05 May 2025 00:32:33 +0000 (0:00:00.281) 0:04:00.534 ************ 2025-05-05 00:32:33.736090 | orchestrator | ok: [testbed-manager] 2025-05-05 00:32:33.765766 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:32:33.799337 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:32:33.843779 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:32:33.908154 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:32:33.908297 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:32:33.908310 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:32:33.908973 | orchestrator | 2025-05-05 00:32:33.909294 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-05 00:32:33.909960 | orchestrator | Monday 05 May 2025 00:32:33 +0000 (0:00:00.277) 0:04:00.811 ************ 2025-05-05 00:32:34.011324 | orchestrator | ok: [testbed-manager] 2025-05-05 00:32:34.057251 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:32:34.094352 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:32:34.129105 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:32:34.204193 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:32:34.205239 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:32:34.206625 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:32:34.208185 | orchestrator | 2025-05-05 00:32:34.208999 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-05 00:32:34.210653 | orchestrator | Monday 05 May 2025 00:32:34 +0000 (0:00:00.295) 0:04:01.107 ************ 2025-05-05 00:32:39.987316 | orchestrator | ok: [testbed-manager] 2025-05-05 00:32:39.987598 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:32:39.987658 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:32:39.988744 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:32:39.989342 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:32:39.989419 | orchestrator | ok: 
[testbed-node-5] 2025-05-05 00:32:39.990429 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:32:39.991134 | orchestrator | 2025-05-05 00:32:39.991773 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-05 00:32:39.992361 | orchestrator | Monday 05 May 2025 00:32:39 +0000 (0:00:05.781) 0:04:06.889 ************ 2025-05-05 00:32:40.344347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:32:40.344643 | orchestrator | 2025-05-05 00:32:40.344705 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-05 00:32:40.347532 | orchestrator | Monday 05 May 2025 00:32:40 +0000 (0:00:00.356) 0:04:07.246 ************ 2025-05-05 00:32:40.419393 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.454106 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-05 00:32:40.454292 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:32:40.456986 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.457238 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-05 00:32:40.491936 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.492073 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:32:40.493191 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-05 00:32:40.494267 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.529312 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:32:40.530264 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-05 00:32:40.565548 | orchestrator | skipping: [testbed-node-5] 2025-05-05 
00:32:40.566194 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.566880 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-05 00:32:40.629897 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:32:40.632666 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.632820 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-05 00:32:40.634209 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:32:40.635165 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-05 00:32:40.636398 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-05 00:32:40.637740 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:32:40.638099 | orchestrator | 2025-05-05 00:32:40.638887 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-05 00:32:40.639990 | orchestrator | Monday 05 May 2025 00:32:40 +0000 (0:00:00.287) 0:04:07.533 ************ 2025-05-05 00:32:41.014709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:32:41.017801 | orchestrator | 2025-05-05 00:32:41.084181 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-05 00:32:41.084302 | orchestrator | Monday 05 May 2025 00:32:41 +0000 (0:00:00.383) 0:04:07.917 ************ 2025-05-05 00:32:41.084323 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-05 00:32:41.121356 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-05 00:32:41.159727 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:32:41.159886 | orchestrator | skipping: [testbed-node-3] 2025-05-05 
00:32:41.159938 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-05 00:32:41.193413 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-05 00:32:41.194108 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:32:41.195265 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-05 00:32:41.223757 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:32:41.284998 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-05 00:32:41.286110 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:32:41.286877 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:32:41.287509 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-05 00:32:41.288422 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:32:41.289166 | orchestrator | 2025-05-05 00:32:41.289946 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-05 00:32:41.290457 | orchestrator | Monday 05 May 2025 00:32:41 +0000 (0:00:00.272) 0:04:08.189 ************ 2025-05-05 00:32:41.659179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:32:41.659690 | orchestrator | 2025-05-05 00:32:41.663133 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-05 00:33:15.691848 | orchestrator | Monday 05 May 2025 00:32:41 +0000 (0:00:00.372) 0:04:08.561 ************ 2025-05-05 00:33:15.692063 | orchestrator | changed: [testbed-manager] 2025-05-05 00:33:15.692566 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:15.692598 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:15.692617 | orchestrator | changed: [testbed-node-1] 
2025-05-05 00:33:15.692642 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:15.692844 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:15.693949 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:15.695008 | orchestrator | 2025-05-05 00:33:15.696011 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-05 00:33:15.696171 | orchestrator | Monday 05 May 2025 00:33:15 +0000 (0:00:34.028) 0:04:42.590 ************ 2025-05-05 00:33:23.205565 | orchestrator | changed: [testbed-manager] 2025-05-05 00:33:23.206142 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:23.209770 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:23.211560 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:23.211597 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:33:23.212755 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:23.213413 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:23.213993 | orchestrator | 2025-05-05 00:33:23.214488 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-05 00:33:23.215167 | orchestrator | Monday 05 May 2025 00:33:23 +0000 (0:00:07.516) 0:04:50.106 ************ 2025-05-05 00:33:30.484749 | orchestrator | changed: [testbed-manager] 2025-05-05 00:33:30.484965 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:30.486112 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:30.486729 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:30.487546 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:33:30.489270 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:30.489754 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:30.490510 | orchestrator | 2025-05-05 00:33:30.491322 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-05 00:33:30.491685 | orchestrator | Monday 05 May 2025 
00:33:30 +0000 (0:00:07.281) 0:04:57.387 ************ 2025-05-05 00:33:32.040440 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:32.040702 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:33:32.041243 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:33:32.043737 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:33:32.044243 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:33:32.044273 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:33:32.044294 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:33:32.045124 | orchestrator | 2025-05-05 00:33:32.045470 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-05 00:33:32.046161 | orchestrator | Monday 05 May 2025 00:33:32 +0000 (0:00:01.554) 0:04:58.942 ************ 2025-05-05 00:33:37.617705 | orchestrator | changed: [testbed-manager] 2025-05-05 00:33:37.619183 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:37.621028 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:37.623091 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:37.623912 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:33:37.624543 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:37.625117 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:37.627786 | orchestrator | 2025-05-05 00:33:37.628092 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-05 00:33:37.628983 | orchestrator | Monday 05 May 2025 00:33:37 +0000 (0:00:05.577) 0:05:04.519 ************ 2025-05-05 00:33:38.028431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:33:38.029042 | orchestrator | 2025-05-05 00:33:38.029456 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration 
directory] ******* 2025-05-05 00:33:38.030169 | orchestrator | Monday 05 May 2025 00:33:38 +0000 (0:00:00.411) 0:05:04.931 ************ 2025-05-05 00:33:38.713778 | orchestrator | changed: [testbed-manager] 2025-05-05 00:33:38.715826 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:38.716925 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:38.718211 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:38.718850 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:38.720928 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:33:38.721438 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:38.721572 | orchestrator | 2025-05-05 00:33:38.721959 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-05 00:33:38.722795 | orchestrator | Monday 05 May 2025 00:33:38 +0000 (0:00:00.684) 0:05:05.616 ************ 2025-05-05 00:33:40.219403 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:40.219584 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:33:40.220603 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:33:40.221087 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:33:40.222170 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:33:40.222896 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:33:40.223743 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:33:40.225141 | orchestrator | 2025-05-05 00:33:40.225925 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-05 00:33:40.226469 | orchestrator | Monday 05 May 2025 00:33:40 +0000 (0:00:01.505) 0:05:07.121 ************ 2025-05-05 00:33:40.998865 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:41.001056 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:41.001107 | orchestrator | changed: [testbed-manager] 2025-05-05 00:33:41.002172 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:41.002981 | orchestrator | changed: 
[testbed-node-1] 2025-05-05 00:33:41.003331 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:41.004163 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:41.004809 | orchestrator | 2025-05-05 00:33:41.005589 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-05 00:33:41.006296 | orchestrator | Monday 05 May 2025 00:33:40 +0000 (0:00:00.778) 0:05:07.900 ************ 2025-05-05 00:33:41.096181 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:33:41.123268 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:33:41.153317 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:33:41.183705 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:33:41.246924 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:33:41.247312 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:33:41.248882 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:33:41.250930 | orchestrator | 2025-05-05 00:33:41.250996 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-05 00:33:41.251830 | orchestrator | Monday 05 May 2025 00:33:41 +0000 (0:00:00.249) 0:05:08.149 ************ 2025-05-05 00:33:41.344996 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:33:41.375474 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:33:41.407898 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:33:41.437420 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:33:41.604117 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:33:41.604310 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:33:41.605518 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:33:41.607230 | orchestrator | 2025-05-05 00:33:41.607915 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-05 00:33:41.607951 | orchestrator | Monday 05 May 2025 00:33:41 +0000 (0:00:00.355) 0:05:08.505 
************ 2025-05-05 00:33:41.715887 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:41.761641 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:33:41.794797 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:33:41.845987 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:33:41.908644 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:33:41.910613 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:33:41.912140 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:33:41.913251 | orchestrator | 2025-05-05 00:33:41.914175 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-05 00:33:41.915174 | orchestrator | Monday 05 May 2025 00:33:41 +0000 (0:00:00.307) 0:05:08.812 ************ 2025-05-05 00:33:42.026472 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:33:42.063921 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:33:42.095279 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:33:42.126297 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:33:42.185642 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:33:42.188954 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:33:42.188990 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:33:42.189069 | orchestrator | 2025-05-05 00:33:42.189095 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-05 00:33:42.271916 | orchestrator | Monday 05 May 2025 00:33:42 +0000 (0:00:00.271) 0:05:09.084 ************ 2025-05-05 00:33:42.272097 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:42.345312 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:33:42.392973 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:33:42.434296 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:33:42.507524 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:33:42.507883 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:33:42.508759 | orchestrator | ok: [testbed-node-2] 
2025-05-05 00:33:42.509404 | orchestrator | 2025-05-05 00:33:42.510533 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-05 00:33:42.511514 | orchestrator | Monday 05 May 2025 00:33:42 +0000 (0:00:00.326) 0:05:09.411 ************ 2025-05-05 00:33:42.591819 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:33:42.642353 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:33:42.678391 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:33:42.742262 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:33:42.801942 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:33:42.802426 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:33:42.803140 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:33:42.803766 | orchestrator | 2025-05-05 00:33:42.804493 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-05 00:33:42.804903 | orchestrator | Monday 05 May 2025 00:33:42 +0000 (0:00:00.295) 0:05:09.706 ************ 2025-05-05 00:33:42.898952 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:33:42.928993 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:33:42.958502 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:33:43.002893 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:33:43.140181 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:33:43.140488 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:33:43.141236 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:33:43.142704 | orchestrator | 2025-05-05 00:33:43.143090 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-05 00:33:43.143719 | orchestrator | Monday 05 May 2025 00:33:43 +0000 (0:00:00.336) 0:05:10.043 ************ 2025-05-05 00:33:43.563404 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:33:43.563645 | orchestrator | 2025-05-05 00:33:43.564387 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-05 00:33:43.564835 | orchestrator | Monday 05 May 2025 00:33:43 +0000 (0:00:00.422) 0:05:10.466 ************ 2025-05-05 00:33:44.427396 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:44.428264 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:33:44.430474 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:33:44.430856 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:33:44.433220 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:33:44.433751 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:33:44.434853 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:33:44.435475 | orchestrator | 2025-05-05 00:33:44.436723 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-05 00:33:47.125613 | orchestrator | Monday 05 May 2025 00:33:44 +0000 (0:00:00.862) 0:05:11.328 ************ 2025-05-05 00:33:47.125763 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:47.126369 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:33:47.126498 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:33:47.127152 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:33:47.130892 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:33:47.131520 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:33:47.132243 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:33:47.132953 | orchestrator | 2025-05-05 00:33:47.133755 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-05 00:33:47.134228 | orchestrator | Monday 05 May 2025 00:33:47 +0000 (0:00:02.700) 
0:05:14.029 ************ 2025-05-05 00:33:47.201544 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-05 00:33:47.201649 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-05 00:33:47.277634 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-05 00:33:47.278164 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-05 00:33:47.278528 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-05 00:33:47.278985 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-05 00:33:47.342899 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:33:47.343205 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-05 00:33:47.343627 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-05 00:33:47.410819 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-05 00:33:47.410968 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:33:47.411119 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-05 00:33:47.487964 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-05 00:33:47.488146 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-05 00:33:47.488185 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:33:47.488331 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-05 00:33:47.488414 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-05 00:33:47.488501 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-05 00:33:47.553735 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:33:47.553895 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-05 00:33:47.554941 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-05 00:33:47.555915 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  
2025-05-05 00:33:47.671156 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:33:47.671964 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:33:47.672135 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-05 00:33:47.673063 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-05 00:33:47.674107 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-05 00:33:47.674272 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:33:47.674838 | orchestrator | 2025-05-05 00:33:47.675380 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-05 00:33:47.675413 | orchestrator | Monday 05 May 2025 00:33:47 +0000 (0:00:00.544) 0:05:14.573 ************ 2025-05-05 00:33:53.952920 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:53.953171 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:53.953209 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:53.953240 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:53.953694 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:33:53.955734 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:53.956562 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:33:53.956998 | orchestrator | 2025-05-05 00:33:53.957638 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-05 00:33:53.958356 | orchestrator | Monday 05 May 2025 00:33:53 +0000 (0:00:06.280) 0:05:20.854 ************ 2025-05-05 00:33:55.018717 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:33:55.019556 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:33:55.020671 | orchestrator | ok: [testbed-manager] 2025-05-05 00:33:55.021172 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:33:55.021862 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:33:55.022979 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:33:55.023686 | 
orchestrator | changed: [testbed-node-2]
2025-05-05 00:33:55.024027 | orchestrator |
2025-05-05 00:33:55.025217 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-05 00:33:55.025863 | orchestrator | Monday 05 May 2025 00:33:55 +0000 (0:00:01.066) 0:05:21.920 ************
2025-05-05 00:34:02.630923 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:02.631409 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:02.632175 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:02.634833 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:02.635304 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:02.636201 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:02.636679 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:02.637633 | orchestrator |
2025-05-05 00:34:02.638137 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-05 00:34:02.638519 | orchestrator | Monday 05 May 2025 00:34:02 +0000 (0:00:07.610) 0:05:29.530 ************
2025-05-05 00:34:05.899395 | orchestrator | changed: [testbed-manager]
2025-05-05 00:34:05.899609 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:05.900316 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:05.900888 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:05.902447 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:05.903640 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:05.903674 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:05.904018 | orchestrator |
2025-05-05 00:34:05.904042 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-05 00:34:05.904777 | orchestrator | Monday 05 May 2025 00:34:05 +0000 (0:00:03.271) 0:05:32.802 ************
2025-05-05 00:34:07.329926 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:07.330504 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:07.334116 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:07.334458 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:07.334491 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:07.334507 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:07.334522 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:07.334544 | orchestrator |
2025-05-05 00:34:07.335029 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-05 00:34:07.335371 | orchestrator | Monday 05 May 2025 00:34:07 +0000 (0:00:01.429) 0:05:34.231 ************
2025-05-05 00:34:08.663494 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:08.665371 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:08.666254 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:08.666322 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:08.667134 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:08.668210 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:08.669404 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:08.670271 | orchestrator |
2025-05-05 00:34:08.671269 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-05 00:34:08.671954 | orchestrator | Monday 05 May 2025 00:34:08 +0000 (0:00:00.601) 0:05:35.562 ************
2025-05-05 00:34:08.927559 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:08.992660 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:09.056588 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:09.264103 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:09.265945 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:09.265989 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:09.266830 | orchestrator | changed: [testbed-manager]
2025-05-05 00:34:09.268266 | orchestrator |
2025-05-05 00:34:09.269205 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-05 00:34:09.269549 | orchestrator | Monday 05 May 2025 00:34:09 +0000 (0:00:00.601) 0:05:36.163 ************
2025-05-05 00:34:18.814551 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:18.815131 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:18.815180 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:18.817086 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:18.818703 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:18.820616 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:18.821060 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:18.822089 | orchestrator |
2025-05-05 00:34:18.822941 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-05 00:34:18.823913 | orchestrator | Monday 05 May 2025 00:34:18 +0000 (0:00:09.552) 0:05:45.716 ************
2025-05-05 00:34:19.671670 | orchestrator | changed: [testbed-manager]
2025-05-05 00:34:19.672289 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:19.673038 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:19.673801 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:19.675345 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:19.675840 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:19.675871 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:19.676668 | orchestrator |
2025-05-05 00:34:19.677517 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-05 00:34:19.677825 | orchestrator | Monday 05 May 2025 00:34:19 +0000 (0:00:00.856) 0:05:46.573 ************
2025-05-05 00:34:32.310545 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:32.310767 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:32.310795 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:32.310811 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:32.310833 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:32.311451 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:32.312638 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:32.313020 | orchestrator |
2025-05-05 00:34:32.313924 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-05 00:34:32.314617 | orchestrator | Monday 05 May 2025 00:34:32 +0000 (0:00:12.633) 0:05:59.207 ************
2025-05-05 00:34:44.826889 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:44.828078 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:44.828120 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:44.828136 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:44.828151 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:44.828200 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:44.830223 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:44.832812 | orchestrator |
2025-05-05 00:34:44.833663 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-05 00:34:45.267132 | orchestrator | Monday 05 May 2025 00:34:44 +0000 (0:00:12.516) 0:06:11.723 ************
2025-05-05 00:34:45.267312 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-05 00:34:45.341677 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-05 00:34:45.341843 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-05 00:34:46.126547 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-05 00:34:46.127981 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-05 00:34:46.128565 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-05 00:34:46.129581 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-05 00:34:46.130649 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-05 00:34:46.133065 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-05 00:34:46.133522 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-05 00:34:46.133633 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-05 00:34:46.133667 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-05 00:34:46.134412 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-05 00:34:46.135213 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-05 00:34:46.135252 | orchestrator |
2025-05-05 00:34:46.135978 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-05 00:34:46.136441 | orchestrator | Monday 05 May 2025 00:34:46 +0000 (0:00:01.303) 0:06:13.027 ************
2025-05-05 00:34:46.257271 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:34:46.318331 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:46.378467 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:46.442221 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:46.501339 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:46.618238 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:46.618770 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:46.620320 | orchestrator |
2025-05-05 00:34:46.620533 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-05 00:34:46.620572 | orchestrator | Monday 05 May 2025 00:34:46 +0000 (0:00:00.491) 0:06:13.519 ************
2025-05-05 00:34:50.375900 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:50.376139 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:50.376616 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:50.378911 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:50.379318 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:50.379339 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:50.379387 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:50.379960 | orchestrator |
2025-05-05 00:34:50.380576 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-05 00:34:50.380981 | orchestrator | Monday 05 May 2025 00:34:50 +0000 (0:00:03.758) 0:06:17.277 ************
2025-05-05 00:34:50.498617 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:34:50.718790 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:50.782447 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:50.848280 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:50.919820 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:51.014523 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:51.014764 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:51.016325 | orchestrator |
2025-05-05 00:34:51.017141 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-05 00:34:51.017953 | orchestrator | Monday 05 May 2025 00:34:51 +0000 (0:00:00.637) 0:06:17.915 ************
2025-05-05 00:34:51.082660 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-05 00:34:51.082866 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-05 00:34:51.159303 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:34:51.161246 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-05 00:34:51.161662 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-05 00:34:51.226915 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:51.227578 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-05 00:34:51.228077 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-05 00:34:51.295538 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:51.295736 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-05 00:34:51.296414 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-05 00:34:51.370296 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:51.370485 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-05 00:34:51.372715 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-05 00:34:51.440291 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-05 00:34:51.440513 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-05 00:34:51.550278 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:51.550507 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:51.550937 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-05 00:34:51.552186 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-05 00:34:51.552942 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:51.553724 | orchestrator |
2025-05-05 00:34:51.554773 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-05 00:34:51.555026 | orchestrator | Monday 05 May 2025 00:34:51 +0000 (0:00:00.536) 0:06:18.452 ************
2025-05-05 00:34:51.686771 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:34:51.748298 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:51.817764 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:51.879689 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:51.954014 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:52.055835 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:52.056101 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:52.056369 | orchestrator |
2025-05-05 00:34:52.057263 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-05 00:34:52.058126 | orchestrator | Monday 05 May 2025 00:34:52 +0000 (0:00:00.507) 0:06:18.959 ************
2025-05-05 00:34:52.181713 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:34:52.245527 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:52.308187 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:52.366549 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:52.435108 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:52.519153 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:52.519691 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:52.520610 | orchestrator |
2025-05-05 00:34:52.521252 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-05 00:34:52.521670 | orchestrator | Monday 05 May 2025 00:34:52 +0000 (0:00:00.460) 0:06:19.419 ************
2025-05-05 00:34:52.653215 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:34:52.715299 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:34:52.777045 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:34:52.845574 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:34:52.907941 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:34:53.031321 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:34:53.031579 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:34:53.033948 | orchestrator |
2025-05-05 00:34:53.034282 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-05 00:34:58.421102 | orchestrator | Monday 05 May 2025 00:34:53 +0000 (0:00:00.513) 0:06:19.933 ************
2025-05-05 00:34:58.421275 | orchestrator | ok: [testbed-manager]
2025-05-05 00:34:58.421359 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:34:58.423160 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:34:58.423610 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:34:58.424234 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:34:58.425022 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:34:58.425456 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:34:58.426246 | orchestrator |
2025-05-05 00:34:58.426919 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-05 00:34:58.427556 | orchestrator | Monday 05 May 2025 00:34:58 +0000 (0:00:05.389) 0:06:25.323 ************
2025-05-05 00:34:59.245742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:34:59.247631 | orchestrator |
2025-05-05 00:34:59.247672 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-05 00:34:59.247697 | orchestrator | Monday 05 May 2025 00:34:59 +0000 (0:00:00.821) 0:06:26.144 ************
2025-05-05 00:35:00.066354 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:00.067264 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:00.070165 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:00.070972 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:00.071033 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:00.071057 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:00.071157 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:00.072196 | orchestrator |
2025-05-05 00:35:00.072885 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-05 00:35:00.073506 | orchestrator | Monday 05 May 2025 00:35:00 +0000 (0:00:00.823) 0:06:26.968 ************
2025-05-05 00:35:00.656506 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:01.058728 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:01.058926 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:01.059161 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:01.059546 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:01.060311 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:01.062833 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:01.063182 | orchestrator |
2025-05-05 00:35:01.064748 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-05 00:35:02.397647 | orchestrator | Monday 05 May 2025 00:35:01 +0000 (0:00:00.991) 0:06:27.959 ************
2025-05-05 00:35:02.397851 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:02.397939 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:02.401725 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:02.401825 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:02.401900 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:02.401923 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:02.402525 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:02.403321 | orchestrator |
2025-05-05 00:35:02.404707 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-05 00:35:02.405560 | orchestrator | Monday 05 May 2025 00:35:02 +0000 (0:00:01.338) 0:06:29.298 ************
2025-05-05 00:35:02.530738 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:35:03.843187 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:03.843376 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:03.844776 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:03.845130 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:03.845969 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:03.846945 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:03.847285 | orchestrator |
2025-05-05 00:35:03.848509 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-05 00:35:03.849450 | orchestrator | Monday 05 May 2025 00:35:03 +0000 (0:00:01.446) 0:06:30.745 ************
2025-05-05 00:35:05.158470 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:05.159576 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:05.160020 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:05.161226 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:05.162375 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:05.163347 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:05.164107 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:05.164753 | orchestrator |
2025-05-05 00:35:05.165443 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-05 00:35:05.165906 | orchestrator | Monday 05 May 2025 00:35:05 +0000 (0:00:01.314) 0:06:32.059 ************
2025-05-05 00:35:06.419317 | orchestrator | changed: [testbed-manager]
2025-05-05 00:35:06.419682 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:06.420886 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:06.421915 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:06.422365 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:06.423564 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:06.424080 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:06.424493 | orchestrator |
2025-05-05 00:35:06.425450 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-05 00:35:06.425856 | orchestrator | Monday 05 May 2025 00:35:06 +0000 (0:00:01.259) 0:06:33.319 ************
2025-05-05 00:35:07.440512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:35:07.440691 | orchestrator |
2025-05-05 00:35:07.441618 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-05 00:35:07.444963 | orchestrator | Monday 05 May 2025 00:35:07 +0000 (0:00:01.022) 0:06:34.341 ************
2025-05-05 00:35:08.810456 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:08.812082 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:08.812305 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:08.815815 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:08.817108 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:08.817215 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:08.817259 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:08.817587 | orchestrator |
2025-05-05 00:35:08.818123 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-05 00:35:08.818798 | orchestrator | Monday 05 May 2025 00:35:08 +0000 (0:00:01.370) 0:06:35.711 ************
2025-05-05 00:35:09.923018 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:09.923241 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:09.923276 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:09.923309 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:09.924363 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:09.925236 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:09.925897 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:09.928182 | orchestrator |
2025-05-05 00:35:09.928836 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-05 00:35:09.929341 | orchestrator | Monday 05 May 2025 00:35:09 +0000 (0:00:01.109) 0:06:36.821 ************
2025-05-05 00:35:11.010719 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:11.011620 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:11.011656 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:11.011679 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:11.012466 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:11.013234 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:11.014150 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:11.014972 | orchestrator |
2025-05-05 00:35:11.015490 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-05 00:35:11.016273 | orchestrator | Monday 05 May 2025 00:35:11 +0000 (0:00:01.089) 0:06:37.910 ************
2025-05-05 00:35:12.247944 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:12.248263 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:12.249360 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:12.250716 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:12.251460 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:12.252864 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:12.253336 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:12.253916 | orchestrator |
2025-05-05 00:35:12.254482 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-05 00:35:12.255202 | orchestrator | Monday 05 May 2025 00:35:12 +0000 (0:00:01.240) 0:06:39.150 ************
2025-05-05 00:35:13.365052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:35:13.365446 | orchestrator |
2025-05-05 00:35:13.366131 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.366510 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.036) 0:06:39.995 ************
2025-05-05 00:35:13.367413 | orchestrator |
2025-05-05 00:35:13.368133 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.370066 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.042) 0:06:40.031 ************
2025-05-05 00:35:13.370860 | orchestrator |
2025-05-05 00:35:13.371575 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.373528 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.036) 0:06:40.074 ************
2025-05-05 00:35:13.374011 | orchestrator |
2025-05-05 00:35:13.374921 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.375577 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.036) 0:06:40.110 ************
2025-05-05 00:35:13.376117 | orchestrator |
2025-05-05 00:35:13.376745 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.377255 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.037) 0:06:40.147 ************
2025-05-05 00:35:13.377941 | orchestrator |
2025-05-05 00:35:13.378641 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.379124 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.042) 0:06:40.190 ************
2025-05-05 00:35:13.379551 | orchestrator |
2025-05-05 00:35:13.380133 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-05 00:35:13.380557 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.037) 0:06:40.227 ************
2025-05-05 00:35:13.381235 | orchestrator |
2025-05-05 00:35:13.381627 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-05 00:35:13.381967 | orchestrator | Monday 05 May 2025 00:35:13 +0000 (0:00:00.037) 0:06:40.265 ************
2025-05-05 00:35:14.305599 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:14.305862 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:14.306313 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:14.307089 | orchestrator |
2025-05-05 00:35:14.307890 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-05 00:35:14.309099 | orchestrator | Monday 05 May 2025 00:35:14 +0000 (0:00:00.939) 0:06:41.205 ************
2025-05-05 00:35:15.887671 | orchestrator | changed: [testbed-manager]
2025-05-05 00:35:15.889023 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:15.889666 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:15.890813 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:15.891589 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:15.892540 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:15.893085 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:15.894099 | orchestrator |
2025-05-05 00:35:15.894629 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-05 00:35:15.895395 | orchestrator | Monday 05 May 2025 00:35:15 +0000 (0:00:01.581) 0:06:42.786 ************
2025-05-05 00:35:16.999861 | orchestrator | changed: [testbed-manager]
2025-05-05 00:35:17.000283 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:17.000330 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:17.002288 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:17.003319 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:17.004489 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:17.005747 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:17.006999 | orchestrator |
2025-05-05 00:35:17.007219 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-05 00:35:17.007925 | orchestrator | Monday 05 May 2025 00:35:16 +0000 (0:00:01.111) 0:06:43.897 ************
2025-05-05 00:35:17.132049 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:35:19.017503 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:19.017687 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:19.019152 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:19.020423 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:19.022099 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:19.023450 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:19.025072 | orchestrator |
2025-05-05 00:35:19.026191 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-05 00:35:19.027139 | orchestrator | Monday 05 May 2025 00:35:19 +0000 (0:00:02.020) 0:06:45.918 ************
2025-05-05 00:35:19.125696 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:35:19.128343 | orchestrator |
2025-05-05 00:35:19.129015 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-05 00:35:19.129854 | orchestrator | Monday 05 May 2025 00:35:19 +0000 (0:00:00.111) 0:06:46.029 ************
2025-05-05 00:35:20.091701 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:20.091936 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:20.093026 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:20.094231 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:20.094706 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:20.095214 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:20.096034 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:20.096792 | orchestrator |
2025-05-05 00:35:20.097532 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-05 00:35:20.098714 | orchestrator | Monday 05 May 2025 00:35:20 +0000 (0:00:00.961) 0:06:46.990 ************
2025-05-05 00:35:20.235917 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:35:20.303743 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:35:20.387420 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:35:20.625413 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:35:20.688017 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:35:20.811538 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:35:20.812281 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:35:20.813803 | orchestrator |
2025-05-05 00:35:20.814889 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-05 00:35:20.815792 | orchestrator | Monday 05 May 2025 00:35:20 +0000 (0:00:00.722) 0:06:47.713 ************
2025-05-05 00:35:21.700919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:35:21.702204 | orchestrator |
2025-05-05 00:35:21.702852 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-05 00:35:21.703701 | orchestrator | Monday 05 May 2025 00:35:21 +0000 (0:00:00.888) 0:06:48.602 ************
2025-05-05 00:35:22.100125 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:22.537731 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:22.537903 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:22.537931 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:22.539363 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:22.540209 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:22.541394 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:22.543243 | orchestrator |
2025-05-05 00:35:22.543968 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-05 00:35:22.544683 | orchestrator | Monday 05 May 2025 00:35:22 +0000 (0:00:00.837) 0:06:49.440 ************
2025-05-05 00:35:25.118935 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-05 00:35:25.119220 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-05 00:35:25.120873 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-05 00:35:25.122727 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-05 00:35:25.124241 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-05 00:35:25.125043 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-05 00:35:25.126097 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-05 00:35:25.127051 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-05 00:35:25.128046 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-05 00:35:25.128390 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-05 00:35:25.129308 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-05 00:35:25.129709 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-05 00:35:25.132095 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-05 00:35:25.132486 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-05 00:35:25.132955 | orchestrator |
2025-05-05 00:35:25.133528 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-05 00:35:25.134071 | orchestrator | Monday 05 May 2025 00:35:25 +0000 (0:00:02.579) 0:06:52.019 ************
2025-05-05 00:35:25.270315 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:35:25.331455 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:35:25.403071 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:35:25.460239 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:35:25.521387 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:35:25.625566 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:35:25.625908 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:35:25.625952 | orchestrator |
2025-05-05 00:35:25.626834 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-05 00:35:25.627873 | orchestrator | Monday 05 May 2025 00:35:25 +0000 (0:00:00.507) 0:06:52.527 ************
2025-05-05 00:35:26.430458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:35:26.432826 | orchestrator |
2025-05-05 00:35:26.435245 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-05 00:35:26.436045 | orchestrator | Monday 05 May 2025 00:35:26 +0000 (0:00:00.803) 0:06:53.330 ************
2025-05-05 00:35:27.376117 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:27.376494 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:27.377188 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:27.377958 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:27.378781 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:27.379332 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:27.379818 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:27.380501 | orchestrator |
2025-05-05 00:35:27.381136 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-05 00:35:27.381590 | orchestrator | Monday 05 May 2025 00:35:27 +0000 (0:00:00.947) 0:06:54.277 ************
2025-05-05 00:35:27.784071 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:28.192321 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:28.192903 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:28.194399 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:28.194911 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:28.196034 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:28.196762 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:28.197390 | orchestrator |
2025-05-05 00:35:28.198113 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-05 00:35:28.198711 | orchestrator | Monday 05 May 2025 00:35:28 +0000 (0:00:00.815) 0:06:55.093 ************
2025-05-05 00:35:28.317092 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:35:28.375774 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:35:28.435942 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:35:28.504073 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:35:28.566122 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:35:28.663252 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:35:28.663578 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:35:28.663950 | orchestrator |
2025-05-05 00:35:28.664710 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-05 00:35:28.665571 | orchestrator | Monday 05 May 2025 00:35:28 +0000 (0:00:00.471) 0:06:55.565 ************
2025-05-05 00:35:30.027444 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:30.027778 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:30.029478 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:30.032844 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:30.037169 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:30.037322 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:30.040351 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:30.040488 | orchestrator |
2025-05-05 00:35:30.040936 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-05 00:35:30.041419 | orchestrator | Monday 05 May 2025 00:35:30 +0000 (0:00:01.364) 0:06:56.929 ************
2025-05-05 00:35:30.154529 | orchestrator | skipping: [testbed-manager]
2025-05-05 00:35:30.221638 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:35:30.280044 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:35:30.341143 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:35:30.414289 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:35:30.497939 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:35:30.498945 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:35:30.499016 | orchestrator |
2025-05-05 00:35:30.499790 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-05 00:35:30.500462 | orchestrator | Monday 05 May 2025 00:35:30 +0000 (0:00:00.470) 0:06:57.399 ************
2025-05-05 00:35:32.417351 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:32.417709 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:35:32.418135 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:35:32.419434 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:35:32.420404 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:35:32.421171 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:35:32.421955 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:35:32.423192 | orchestrator |
2025-05-05 00:35:32.423947 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-05 00:35:32.424510 | orchestrator | Monday 05 May 2025 00:35:32 +0000 (0:00:01.918) 0:06:59.317 ************
2025-05-05 00:35:33.695415 | orchestrator | ok: [testbed-manager]
2025-05-05 00:35:33.695589 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:35:33.695832 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:35:33.696513 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:35:33.696569 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:35:33.696956 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:35:33.697847 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:35:33.698257 | orchestrator
| 2025-05-05 00:35:33.698577 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-05 00:35:33.699083 | orchestrator | Monday 05 May 2025 00:35:33 +0000 (0:00:01.280) 0:07:00.598 ************ 2025-05-05 00:35:35.423016 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:35.423169 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:35:35.424653 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:35:35.425545 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:35:35.426844 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:35:35.427944 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:35:35.428421 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:35:35.429013 | orchestrator | 2025-05-05 00:35:35.429596 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-05 00:35:35.430155 | orchestrator | Monday 05 May 2025 00:35:35 +0000 (0:00:01.725) 0:07:02.324 ************ 2025-05-05 00:35:37.054814 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:37.055104 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:35:37.055175 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:35:37.056924 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:35:37.057186 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:35:37.058644 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:35:37.059359 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:35:37.060068 | orchestrator | 2025-05-05 00:35:37.060857 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-05 00:35:37.061084 | orchestrator | Monday 05 May 2025 00:35:37 +0000 (0:00:01.631) 0:07:03.955 ************ 2025-05-05 00:35:37.580714 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:37.652483 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:38.092950 | orchestrator | ok: [testbed-node-4] 2025-05-05 
00:35:38.093236 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:38.095101 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:38.095922 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:38.095989 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:38.096678 | orchestrator | 2025-05-05 00:35:38.098201 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-05 00:35:38.098662 | orchestrator | Monday 05 May 2025 00:35:38 +0000 (0:00:01.036) 0:07:04.991 ************ 2025-05-05 00:35:38.213412 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:35:38.277839 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:35:38.339903 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:35:38.403420 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:35:38.471387 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:35:38.862721 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:35:38.863205 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:35:38.863721 | orchestrator | 2025-05-05 00:35:38.864484 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-05 00:35:38.865869 | orchestrator | Monday 05 May 2025 00:35:38 +0000 (0:00:00.769) 0:07:05.761 ************ 2025-05-05 00:35:38.997535 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:35:39.058522 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:35:39.126705 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:35:39.188065 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:35:39.247989 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:35:39.343931 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:35:39.348374 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:35:39.350902 | orchestrator | 2025-05-05 00:35:39.351676 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-05 
00:35:39.351761 | orchestrator | Monday 05 May 2025 00:35:39 +0000 (0:00:00.486) 0:07:06.247 ************ 2025-05-05 00:35:39.472158 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:39.538464 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:39.600187 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:39.666182 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:39.734376 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:39.835267 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:39.836412 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:39.838314 | orchestrator | 2025-05-05 00:35:39.839556 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-05 00:35:39.840576 | orchestrator | Monday 05 May 2025 00:35:39 +0000 (0:00:00.488) 0:07:06.736 ************ 2025-05-05 00:35:40.122167 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:40.185216 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:40.258407 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:40.326143 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:40.388876 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:40.480712 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:40.481148 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:40.483000 | orchestrator | 2025-05-05 00:35:40.483466 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-05 00:35:40.483784 | orchestrator | Monday 05 May 2025 00:35:40 +0000 (0:00:00.646) 0:07:07.383 ************ 2025-05-05 00:35:40.607256 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:40.673263 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:40.735136 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:40.797032 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:40.858647 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:40.960883 | orchestrator | ok: [testbed-node-1] 2025-05-05 
00:35:40.961290 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:40.962198 | orchestrator | 2025-05-05 00:35:40.963312 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-05 00:35:40.963873 | orchestrator | Monday 05 May 2025 00:35:40 +0000 (0:00:00.481) 0:07:07.864 ************ 2025-05-05 00:35:46.790375 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:46.790598 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:46.791472 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:46.792212 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:46.793087 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:46.793673 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:46.794272 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:46.794948 | orchestrator | 2025-05-05 00:35:46.795917 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-05 00:35:46.796392 | orchestrator | Monday 05 May 2025 00:35:46 +0000 (0:00:05.826) 0:07:13.691 ************ 2025-05-05 00:35:47.032247 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:35:47.095849 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:35:47.166357 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:35:47.232075 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:35:47.349357 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:35:47.349566 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:35:47.349595 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:35:47.351049 | orchestrator | 2025-05-05 00:35:47.351370 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-05 00:35:47.351457 | orchestrator | Monday 05 May 2025 00:35:47 +0000 (0:00:00.560) 0:07:14.252 ************ 2025-05-05 00:35:48.306382 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:35:48.307486 | orchestrator | 2025-05-05 00:35:48.307878 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-05 00:35:48.308978 | orchestrator | Monday 05 May 2025 00:35:48 +0000 (0:00:00.954) 0:07:15.207 ************ 2025-05-05 00:35:50.197589 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:50.197919 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:50.198509 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:50.202311 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:50.203104 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:50.204190 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:50.204832 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:50.206786 | orchestrator | 2025-05-05 00:35:50.207123 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-05 00:35:50.207617 | orchestrator | Monday 05 May 2025 00:35:50 +0000 (0:00:01.891) 0:07:17.098 ************ 2025-05-05 00:35:51.348717 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:51.349157 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:51.349750 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:51.351156 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:51.351923 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:51.352421 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:51.353264 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:51.353456 | orchestrator | 2025-05-05 00:35:51.353978 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-05 00:35:51.354423 | orchestrator | Monday 05 May 2025 00:35:51 +0000 (0:00:01.151) 0:07:18.250 ************ 
2025-05-05 00:35:52.173210 | orchestrator | ok: [testbed-manager] 2025-05-05 00:35:52.173422 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:35:52.174137 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:35:52.176111 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:35:52.177611 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:35:52.178505 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:35:52.179722 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:35:52.180452 | orchestrator | 2025-05-05 00:35:52.181468 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-05 00:35:52.182313 | orchestrator | Monday 05 May 2025 00:35:52 +0000 (0:00:00.823) 0:07:19.074 ************ 2025-05-05 00:35:54.084464 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.084695 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.086334 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.089276 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.093150 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.096552 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.098102 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-05 00:35:54.099414 | orchestrator | 2025-05-05 00:35:54.099689 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-05 00:35:54.100626 | orchestrator | Monday 05 May 2025 00:35:54 +0000 (0:00:01.904) 0:07:20.978 ************ 2025-05-05 00:35:54.911494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:35:54.911707 | orchestrator | 2025-05-05 00:35:54.911739 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-05 00:35:54.912274 | orchestrator | Monday 05 May 2025 00:35:54 +0000 (0:00:00.836) 0:07:21.814 ************ 2025-05-05 00:36:04.735247 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:04.735614 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:36:04.736150 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:04.737058 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:04.739988 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:04.742072 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:04.742093 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:04.742103 | orchestrator | 2025-05-05 00:36:04.742113 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-05 00:36:04.742127 | orchestrator | Monday 05 May 2025 00:36:04 +0000 (0:00:09.822) 0:07:31.636 ************ 2025-05-05 00:36:06.459261 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:06.460215 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:06.460846 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:06.465290 | orchestrator | ok: 
[testbed-node-5] 2025-05-05 00:36:06.465645 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:06.465672 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:06.465687 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:06.465703 | orchestrator | 2025-05-05 00:36:06.465725 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-05 00:36:06.466278 | orchestrator | Monday 05 May 2025 00:36:06 +0000 (0:00:01.723) 0:07:33.360 ************ 2025-05-05 00:36:07.831761 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:07.832119 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:07.833662 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:07.835705 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:07.836275 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:07.837479 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:07.839777 | orchestrator | 2025-05-05 00:36:07.840715 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-05 00:36:07.841094 | orchestrator | Monday 05 May 2025 00:36:07 +0000 (0:00:01.371) 0:07:34.732 ************ 2025-05-05 00:36:09.261516 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:09.262390 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:09.263590 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:09.265907 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:09.266697 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:09.267404 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:36:09.268286 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:09.268879 | orchestrator | 2025-05-05 00:36:09.269688 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-05 00:36:09.270396 | orchestrator | 2025-05-05 00:36:09.271771 | orchestrator | TASK [Include hardening role] 
************************************************** 2025-05-05 00:36:09.272490 | orchestrator | Monday 05 May 2025 00:36:09 +0000 (0:00:01.432) 0:07:36.164 ************ 2025-05-05 00:36:09.394483 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:36:09.454281 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:36:09.523585 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:36:09.584043 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:36:09.642372 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:36:09.759583 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:36:09.760043 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:36:09.761296 | orchestrator | 2025-05-05 00:36:09.761643 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-05 00:36:09.762138 | orchestrator | 2025-05-05 00:36:09.762560 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-05 00:36:09.762900 | orchestrator | Monday 05 May 2025 00:36:09 +0000 (0:00:00.497) 0:07:36.662 ************ 2025-05-05 00:36:11.084029 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:11.085166 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:11.085551 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:11.086138 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:11.087605 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:11.087903 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:36:11.088642 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:11.089754 | orchestrator | 2025-05-05 00:36:11.090707 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-05 00:36:11.090789 | orchestrator | Monday 05 May 2025 00:36:11 +0000 (0:00:01.320) 0:07:37.983 ************ 2025-05-05 00:36:12.445763 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:12.449075 | 
orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:12.574715 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:12.574828 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:12.574846 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:12.574861 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:12.574875 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:12.574890 | orchestrator | 2025-05-05 00:36:12.574907 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-05 00:36:12.574923 | orchestrator | Monday 05 May 2025 00:36:12 +0000 (0:00:01.361) 0:07:39.344 ************ 2025-05-05 00:36:12.575005 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:36:12.798442 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:36:12.875524 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:36:12.946646 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:36:13.013501 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:36:13.394534 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:36:13.395102 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:36:13.395885 | orchestrator | 2025-05-05 00:36:13.396469 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-05 00:36:13.396849 | orchestrator | Monday 05 May 2025 00:36:13 +0000 (0:00:00.952) 0:07:40.296 ************ 2025-05-05 00:36:14.703572 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:14.704218 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:14.706176 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:14.707345 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:14.708186 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:14.709112 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:36:14.709987 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:14.710988 | orchestrator | 2025-05-05 00:36:14.711750 | 
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-05 00:36:14.712477 | orchestrator | 2025-05-05 00:36:14.713306 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-05 00:36:14.713907 | orchestrator | Monday 05 May 2025 00:36:14 +0000 (0:00:01.309) 0:07:41.606 ************ 2025-05-05 00:36:15.633093 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:36:15.633296 | orchestrator | 2025-05-05 00:36:15.633799 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-05 00:36:15.634441 | orchestrator | Monday 05 May 2025 00:36:15 +0000 (0:00:00.928) 0:07:42.534 ************ 2025-05-05 00:36:16.033501 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:16.437778 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:16.438159 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:16.438209 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:16.439122 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:16.440581 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:16.442406 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:16.443154 | orchestrator | 2025-05-05 00:36:16.444985 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-05 00:36:16.446590 | orchestrator | Monday 05 May 2025 00:36:16 +0000 (0:00:00.804) 0:07:43.339 ************ 2025-05-05 00:36:17.545382 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:17.546364 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:17.546418 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:17.547176 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:17.548099 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:17.549665 | orchestrator | 
changed: [testbed-node-1] 2025-05-05 00:36:17.550520 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:17.551196 | orchestrator | 2025-05-05 00:36:17.551744 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-05 00:36:17.552700 | orchestrator | Monday 05 May 2025 00:36:17 +0000 (0:00:01.105) 0:07:44.444 ************ 2025-05-05 00:36:18.500419 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:36:18.501413 | orchestrator | 2025-05-05 00:36:18.502081 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-05 00:36:18.503992 | orchestrator | Monday 05 May 2025 00:36:18 +0000 (0:00:00.956) 0:07:45.400 ************ 2025-05-05 00:36:18.887639 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:19.326158 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:19.326638 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:19.326685 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:19.329452 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:19.330170 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:19.330218 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:19.331070 | orchestrator | 2025-05-05 00:36:19.331854 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-05 00:36:19.332570 | orchestrator | Monday 05 May 2025 00:36:19 +0000 (0:00:00.826) 0:07:46.227 ************ 2025-05-05 00:36:20.359019 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:20.360278 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:20.360336 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:20.361410 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:20.362729 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:20.363294 | orchestrator | 
changed: [testbed-node-1] 2025-05-05 00:36:20.364206 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:20.364864 | orchestrator | 2025-05-05 00:36:20.365680 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:36:20.366187 | orchestrator | 2025-05-05 00:36:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:36:20.367200 | orchestrator | 2025-05-05 00:36:20 | INFO  | Please wait and do not abort execution. 2025-05-05 00:36:20.367247 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-05 00:36:20.367571 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-05 00:36:20.368123 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-05 00:36:20.369015 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-05 00:36:20.369661 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-05 00:36:20.370810 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-05 00:36:20.371319 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-05 00:36:20.371791 | orchestrator | 2025-05-05 00:36:20.372348 | orchestrator | Monday 05 May 2025 00:36:20 +0000 (0:00:01.032) 0:07:47.259 ************ 2025-05-05 00:36:20.373041 | orchestrator | =============================================================================== 2025-05-05 00:36:20.373415 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.38s 2025-05-05 00:36:20.373766 | orchestrator | osism.commons.packages : Download required packages 
-------------------- 37.93s 2025-05-05 00:36:20.374132 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.03s 2025-05-05 00:36:20.374559 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.52s 2025-05-05 00:36:20.374796 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.63s 2025-05-05 00:36:20.375157 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.52s 2025-05-05 00:36:20.375496 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.41s 2025-05-05 00:36:20.375867 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.40s 2025-05-05 00:36:20.376233 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.82s 2025-05-05 00:36:20.376852 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.55s 2025-05-05 00:36:20.377977 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.01s 2025-05-05 00:36:20.378260 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.83s 2025-05-05 00:36:20.378607 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.61s 2025-05-05 00:36:20.378643 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.52s 2025-05-05 00:36:20.378848 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.28s 2025-05-05 00:36:20.379292 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.28s 2025-05-05 00:36:20.379530 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.83s 2025-05-05 00:36:20.379824 | orchestrator | osism.commons.cleanup : Populate service facts 
-------------------------- 5.78s 2025-05-05 00:36:20.380236 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.69s 2025-05-05 00:36:20.380486 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.58s 2025-05-05 00:36:20.978567 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-05 00:36:22.740515 | orchestrator | + osism apply network 2025-05-05 00:36:22.740666 | orchestrator | 2025-05-05 00:36:22 | INFO  | Task d19319bf-38b4-4b0e-9fc7-db5a34e66331 (network) was prepared for execution. 2025-05-05 00:36:25.972310 | orchestrator | 2025-05-05 00:36:22 | INFO  | It takes a moment until task d19319bf-38b4-4b0e-9fc7-db5a34e66331 (network) has been started and output is visible here. 2025-05-05 00:36:25.972539 | orchestrator | 2025-05-05 00:36:25.972663 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-05 00:36:25.977002 | orchestrator | 2025-05-05 00:36:26.114758 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-05 00:36:26.114904 | orchestrator | Monday 05 May 2025 00:36:25 +0000 (0:00:00.196) 0:00:00.196 ************ 2025-05-05 00:36:26.115027 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:26.190495 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:26.265509 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:26.342332 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:26.415807 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:26.622417 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:26.622855 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:26.623551 | orchestrator | 2025-05-05 00:36:26.627090 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-05 00:36:27.745456 | orchestrator | Monday 05 May 2025 00:36:26 +0000 (0:00:00.649) 0:00:00.846 ************ 2025-05-05 00:36:27.745693 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:36:27.745782 | orchestrator | 2025-05-05 00:36:27.745808 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-05 00:36:27.746098 | orchestrator | Monday 05 May 2025 00:36:27 +0000 (0:00:01.122) 0:00:01.968 ************ 2025-05-05 00:36:29.872430 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:29.874339 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:29.874383 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:29.874399 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:29.874414 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:29.874437 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:29.875915 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:29.878119 | orchestrator | 2025-05-05 00:36:31.733106 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-05 00:36:31.733266 | orchestrator | Monday 05 May 2025 00:36:29 +0000 (0:00:02.123) 0:00:04.092 ************ 2025-05-05 00:36:31.733308 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:31.734145 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:31.734834 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:31.735441 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:31.736231 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:31.737143 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:31.737815 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:31.740141 | orchestrator | 2025-05-05 00:36:31.741398 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-05 00:36:32.212749 | orchestrator | Monday 05 May 2025 00:36:31 +0000 (0:00:01.861) 
0:00:05.954 ************ 2025-05-05 00:36:32.212900 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-05 00:36:32.833884 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-05 00:36:32.834293 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-05 00:36:32.836022 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-05 00:36:32.836618 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-05 00:36:32.841016 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-05 00:36:32.841301 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-05 00:36:32.842082 | orchestrator | 2025-05-05 00:36:32.842710 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-05 00:36:32.843251 | orchestrator | Monday 05 May 2025 00:36:32 +0000 (0:00:01.103) 0:00:07.057 ************ 2025-05-05 00:36:34.551694 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-05 00:36:34.551893 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 00:36:34.552078 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-05 00:36:34.553223 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-05 00:36:34.556444 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-05 00:36:34.557786 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-05 00:36:34.558468 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-05 00:36:34.559001 | orchestrator | 2025-05-05 00:36:34.559430 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-05 00:36:34.560226 | orchestrator | Monday 05 May 2025 00:36:34 +0000 (0:00:01.719) 0:00:08.776 ************ 2025-05-05 00:36:36.253343 | orchestrator | changed: [testbed-manager] 2025-05-05 00:36:36.255251 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:36.258225 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:36:36.258326 
| orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:36.258350 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:36.258366 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:36.258387 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:36.258951 | orchestrator | 2025-05-05 00:36:36.259724 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-05 00:36:36.260022 | orchestrator | Monday 05 May 2025 00:36:36 +0000 (0:00:01.693) 0:00:10.470 ************ 2025-05-05 00:36:36.842424 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-05 00:36:36.926434 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-05 00:36:37.391804 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 00:36:37.392343 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-05 00:36:37.393367 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-05 00:36:37.394284 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-05 00:36:37.395049 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-05 00:36:37.395670 | orchestrator | 2025-05-05 00:36:37.396529 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-05 00:36:37.397454 | orchestrator | Monday 05 May 2025 00:36:37 +0000 (0:00:01.146) 0:00:11.617 ************ 2025-05-05 00:36:38.017694 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:38.204719 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:38.359291 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:38.837317 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:38.838178 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:38.839065 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:38.841935 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:38.842703 | orchestrator | 2025-05-05 00:36:38.842743 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 
2025-05-05 00:36:38.842768 | orchestrator | Monday 05 May 2025 00:36:38 +0000 (0:00:01.438) 0:00:13.056 ************ 2025-05-05 00:36:39.038292 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:36:39.158492 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:36:39.252993 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:36:39.375254 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:36:39.481262 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:36:39.899747 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:36:39.900018 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:36:39.900812 | orchestrator | 2025-05-05 00:36:39.901773 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-05 00:36:39.902625 | orchestrator | Monday 05 May 2025 00:36:39 +0000 (0:00:01.061) 0:00:14.118 ************ 2025-05-05 00:36:42.210164 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:42.210372 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:42.213445 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:42.213544 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:42.213564 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:42.213583 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:42.214011 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:42.215943 | orchestrator | 2025-05-05 00:36:42.216311 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-05 00:36:42.216961 | orchestrator | Monday 05 May 2025 00:36:42 +0000 (0:00:02.316) 0:00:16.434 ************ 2025-05-05 00:36:44.096493 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-05 00:36:44.098706 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.100351 | 
orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.100537 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.102111 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.103202 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.104391 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.105663 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-05 00:36:44.106112 | orchestrator | 2025-05-05 00:36:44.106940 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-05 00:36:44.107618 | orchestrator | Monday 05 May 2025 00:36:44 +0000 (0:00:01.882) 0:00:18.316 ************ 2025-05-05 00:36:46.484779 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:46.485052 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:36:46.485090 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:36:46.485865 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:36:46.487102 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:36:46.488124 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:36:46.488221 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:36:46.489558 | orchestrator | 2025-05-05 00:36:46.489859 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-05 00:36:46.489893 | orchestrator | Monday 05 May 2025 00:36:46 +0000 (0:00:02.391) 0:00:20.708 ************ 2025-05-05 
00:36:47.928990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:36:47.929648 | orchestrator | 2025-05-05 00:36:47.929702 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-05 00:36:47.930696 | orchestrator | Monday 05 May 2025 00:36:47 +0000 (0:00:01.441) 0:00:22.149 ************ 2025-05-05 00:36:48.445358 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:48.904219 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:48.905040 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:48.906369 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:48.907616 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:48.908544 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:48.909195 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:48.910103 | orchestrator | 2025-05-05 00:36:48.910694 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-05 00:36:48.911512 | orchestrator | Monday 05 May 2025 00:36:48 +0000 (0:00:00.978) 0:00:23.128 ************ 2025-05-05 00:36:49.065351 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:49.146624 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:36:49.377961 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:36:49.462983 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:36:49.549173 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:36:49.690528 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:36:49.691533 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:36:49.692461 | orchestrator | 2025-05-05 00:36:49.693796 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-05 00:36:49.694524 | orchestrator | Monday 05 May 2025 00:36:49 +0000 
(0:00:00.783) 0:00:23.911 ************ 2025-05-05 00:36:50.111715 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.112527 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.209024 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.209314 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.708695 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.709169 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.709217 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.709996 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.710603 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.711366 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.711872 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.712568 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.713371 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-05 00:36:50.713784 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-05 00:36:50.714719 | orchestrator | 2025-05-05 00:36:50.714996 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-05 00:36:50.718258 | orchestrator | Monday 05 May 2025 00:36:50 +0000 (0:00:01.020) 0:00:24.932 ************ 2025-05-05 00:36:51.042389 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:36:51.144414 | orchestrator | skipping: 
[testbed-node-0] 2025-05-05 00:36:51.226551 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:36:51.308314 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:36:51.398160 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:36:52.537296 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:36:52.540231 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:36:52.696150 | orchestrator | 2025-05-05 00:36:52.696299 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-05 00:36:52.696327 | orchestrator | Monday 05 May 2025 00:36:52 +0000 (0:00:01.825) 0:00:26.757 ************ 2025-05-05 00:36:52.696360 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:36:52.776461 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:36:53.027414 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:36:53.109807 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:36:53.190351 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:36:53.225485 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:36:53.226113 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:36:53.226740 | orchestrator | 2025-05-05 00:36:53.227804 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:36:53.228155 | orchestrator | 2025-05-05 00:36:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:36:53.228259 | orchestrator | 2025-05-05 00:36:53 | INFO  | Please wait and do not abort execution. 
2025-05-05 00:36:53.229121 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.230408 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.231032 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.231876 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.232571 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.233210 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.233657 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:36:53.234440 | orchestrator | 2025-05-05 00:36:53.235024 | orchestrator | Monday 05 May 2025 00:36:53 +0000 (0:00:00.695) 0:00:27.453 ************ 2025-05-05 00:36:53.235778 | orchestrator | =============================================================================== 2025-05-05 00:36:53.236446 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 2.39s 2025-05-05 00:36:53.237082 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.32s 2025-05-05 00:36:53.237587 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.12s 2025-05-05 00:36:53.238228 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.88s 2025-05-05 00:36:53.238639 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s 2025-05-05 00:36:53.239038 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.83s 2025-05-05 00:36:53.239370 | 
orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.72s 2025-05-05 00:36:53.239684 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.69s 2025-05-05 00:36:53.240039 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.44s 2025-05-05 00:36:53.240400 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.44s 2025-05-05 00:36:53.240765 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.15s 2025-05-05 00:36:53.241076 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.12s 2025-05-05 00:36:53.241419 | orchestrator | osism.commons.network : Create required directories --------------------- 1.10s 2025-05-05 00:36:53.241763 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 1.06s 2025-05-05 00:36:53.242108 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.02s 2025-05-05 00:36:53.242320 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-05-05 00:36:53.243112 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.78s 2025-05-05 00:36:53.243482 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.70s 2025-05-05 00:36:53.243567 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.65s 2025-05-05 00:36:53.708190 | orchestrator | + osism apply wireguard 2025-05-05 00:36:55.112227 | orchestrator | 2025-05-05 00:36:55 | INFO  | Task 73bcbde3-074a-47aa-bed4-a41ecbe27020 (wireguard) was prepared for execution. 
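As the osism.commons.network tasks above show, the role renders a netplan file (/etc/netplan/01-osism.yaml) and removes the cloud-init default (/etc/netplan/50-cloud-init.yaml). A minimal netplan file of the general shape such a role produces might look like the following — an illustrative sketch only; interface names and addresses here are assumptions, since the real content is templated from the testbed inventory:

```yaml
# /etc/netplan/01-osism.yaml -- illustrative sketch only; the actual
# file is rendered from the OSISM inventory. Interface names and
# addresses below are placeholder assumptions.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: true
    ens4:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
```

After such a file is in place, `netplan apply` activates it, which matches the "Apply netplan configuration" plays that run later in this log.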
2025-05-05 00:36:58.145550 | orchestrator | 2025-05-05 00:36:55 | INFO  | It takes a moment until task 73bcbde3-074a-47aa-bed4-a41ecbe27020 (wireguard) has been started and output is visible here. 2025-05-05 00:36:58.145715 | orchestrator | 2025-05-05 00:36:58.145792 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-05 00:36:58.148372 | orchestrator | 2025-05-05 00:36:58.149320 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-05 00:36:58.150834 | orchestrator | Monday 05 May 2025 00:36:58 +0000 (0:00:00.164) 0:00:00.164 ************ 2025-05-05 00:36:59.613083 | orchestrator | ok: [testbed-manager] 2025-05-05 00:36:59.614466 | orchestrator | 2025-05-05 00:37:05.762342 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-05 00:37:05.762537 | orchestrator | Monday 05 May 2025 00:36:59 +0000 (0:00:01.469) 0:00:01.633 ************ 2025-05-05 00:37:05.762581 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:05.762660 | orchestrator | 2025-05-05 00:37:05.763315 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-05 00:37:05.763546 | orchestrator | Monday 05 May 2025 00:37:05 +0000 (0:00:06.148) 0:00:07.782 ************ 2025-05-05 00:37:06.292223 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:06.292399 | orchestrator | 2025-05-05 00:37:06.292501 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-05 00:37:06.293061 | orchestrator | Monday 05 May 2025 00:37:06 +0000 (0:00:00.531) 0:00:08.313 ************ 2025-05-05 00:37:06.735613 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:06.736271 | orchestrator | 2025-05-05 00:37:06.736353 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-05 00:37:06.736442 | orchestrator 
| Monday 05 May 2025 00:37:06 +0000 (0:00:00.443) 0:00:08.756 ************ 2025-05-05 00:37:07.241395 | orchestrator | ok: [testbed-manager] 2025-05-05 00:37:07.242237 | orchestrator | 2025-05-05 00:37:07.243355 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-05 00:37:07.243852 | orchestrator | Monday 05 May 2025 00:37:07 +0000 (0:00:00.504) 0:00:09.261 ************ 2025-05-05 00:37:07.768343 | orchestrator | ok: [testbed-manager] 2025-05-05 00:37:07.768596 | orchestrator | 2025-05-05 00:37:07.769173 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-05 00:37:07.769874 | orchestrator | Monday 05 May 2025 00:37:07 +0000 (0:00:00.528) 0:00:09.790 ************ 2025-05-05 00:37:08.204942 | orchestrator | ok: [testbed-manager] 2025-05-05 00:37:08.205519 | orchestrator | 2025-05-05 00:37:08.206268 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-05 00:37:08.206852 | orchestrator | Monday 05 May 2025 00:37:08 +0000 (0:00:00.434) 0:00:10.225 ************ 2025-05-05 00:37:09.368625 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:09.369087 | orchestrator | 2025-05-05 00:37:09.369111 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-05 00:37:09.369811 | orchestrator | Monday 05 May 2025 00:37:09 +0000 (0:00:01.163) 0:00:11.388 ************ 2025-05-05 00:37:10.235377 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-05 00:37:10.236260 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:10.236324 | orchestrator | 2025-05-05 00:37:10.237440 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-05 00:37:10.237726 | orchestrator | Monday 05 May 2025 00:37:10 +0000 (0:00:00.865) 0:00:12.254 ************ 2025-05-05 00:37:11.929630 | orchestrator | changed: 
[testbed-manager] 2025-05-05 00:37:11.930174 | orchestrator | 2025-05-05 00:37:11.932338 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-05 00:37:11.934064 | orchestrator | Monday 05 May 2025 00:37:11 +0000 (0:00:01.694) 0:00:13.949 ************ 2025-05-05 00:37:12.828717 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:12.829536 | orchestrator | 2025-05-05 00:37:12.830246 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:37:12.830997 | orchestrator | 2025-05-05 00:37:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:37:12.831870 | orchestrator | 2025-05-05 00:37:12 | INFO  | Please wait and do not abort execution. 2025-05-05 00:37:12.831941 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:37:12.832456 | orchestrator | 2025-05-05 00:37:12.833063 | orchestrator | Monday 05 May 2025 00:37:12 +0000 (0:00:00.901) 0:00:14.851 ************ 2025-05-05 00:37:12.833607 | orchestrator | =============================================================================== 2025-05-05 00:37:12.834079 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.15s 2025-05-05 00:37:12.834773 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-05-05 00:37:12.834980 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.47s 2025-05-05 00:37:12.835459 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2025-05-05 00:37:12.835749 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2025-05-05 00:37:12.836722 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s 2025-05-05 
00:37:12.836824 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2025-05-05 00:37:12.837293 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-05-05 00:37:12.838007 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s 2025-05-05 00:37:12.838601 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-05-05 00:37:12.838996 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-05-05 00:37:13.313992 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-05 00:37:13.348604 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-05 00:37:13.434281 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-05 00:37:13.434416 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 175 0 --:--:-- --:--:-- --:--:-- 176 2025-05-05 00:37:13.451203 | orchestrator | + osism apply --environment custom workarounds 2025-05-05 00:37:14.827784 | orchestrator | 2025-05-05 00:37:14 | INFO  | Trying to run play workarounds in environment custom 2025-05-05 00:37:14.874176 | orchestrator | 2025-05-05 00:37:14 | INFO  | Task 5c7a016a-eda2-4413-ab5d-9104658bfe56 (workarounds) was prepared for execution. 2025-05-05 00:37:17.967350 | orchestrator | 2025-05-05 00:37:14 | INFO  | It takes a moment until task 5c7a016a-eda2-4413-ab5d-9104658bfe56 (workarounds) has been started and output is visible here. 
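The wireguard play above generates server and preshared keys, renders wg0.conf, and manages wg-quick@wg0.service on the manager. For orientation, a wg-quick server configuration generally has the following shape — illustrative only; the keys, addresses, and port below are placeholders, not the values the role actually writes:

```ini
# /etc/wireguard/wg0.conf -- illustrative placeholder values only;
# real keys/addresses come from the osism.services.wireguard role.
[Interface]
Address = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```

The "Restart wg0 service" handler above corresponds to `systemctl restart wg-quick@wg0`, which brings the interface up from this file.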
2025-05-05 00:37:17.967505 | orchestrator | 2025-05-05 00:37:17.970975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 00:37:18.129669 | orchestrator | 2025-05-05 00:37:18.129783 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-05 00:37:18.129816 | orchestrator | Monday 05 May 2025 00:37:17 +0000 (0:00:00.134) 0:00:00.134 ************ 2025-05-05 00:37:18.129842 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-05 00:37:18.210501 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-05 00:37:18.295357 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-05 00:37:18.376859 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-05 00:37:18.458149 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-05 00:37:18.706151 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-05 00:37:18.706342 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-05 00:37:18.707673 | orchestrator | 2025-05-05 00:37:18.708578 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-05 00:37:18.710152 | orchestrator | 2025-05-05 00:37:18.710802 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-05 00:37:18.711845 | orchestrator | Monday 05 May 2025 00:37:18 +0000 (0:00:00.740) 0:00:00.875 ************ 2025-05-05 00:37:21.205696 | orchestrator | ok: [testbed-manager] 2025-05-05 00:37:21.205980 | orchestrator | 2025-05-05 00:37:21.206718 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-05 00:37:21.207907 | orchestrator | 2025-05-05 00:37:21.208710 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-05-05 00:37:21.210203 | orchestrator | Monday 05 May 2025 00:37:21 +0000 (0:00:02.498) 0:00:03.373 ************ 2025-05-05 00:37:23.000056 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:37:23.000245 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:37:23.001103 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:37:23.004289 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:37:23.004706 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:37:23.004732 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:37:23.004747 | orchestrator | 2025-05-05 00:37:23.004768 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-05 00:37:23.005172 | orchestrator | 2025-05-05 00:37:23.005474 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-05 00:37:23.006228 | orchestrator | Monday 05 May 2025 00:37:22 +0000 (0:00:01.794) 0:00:05.168 ************ 2025-05-05 00:37:24.466524 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-05 00:37:24.466719 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-05 00:37:24.467958 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-05 00:37:24.469585 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-05 00:37:24.471467 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-05 00:37:24.472842 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-05 00:37:24.473287 | orchestrator | 2025-05-05 00:37:24.474579 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-05-05 00:37:24.475310 | orchestrator | Monday 05 May 2025 00:37:24 +0000 (0:00:01.465) 0:00:06.633 ************ 2025-05-05 00:37:28.357421 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:37:28.357653 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:37:28.358547 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:37:28.358691 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:37:28.359205 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:37:28.360424 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:37:28.361168 | orchestrator | 2025-05-05 00:37:28.361983 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-05 00:37:28.362755 | orchestrator | Monday 05 May 2025 00:37:28 +0000 (0:00:03.893) 0:00:10.526 ************ 2025-05-05 00:37:28.513560 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:37:28.597563 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:37:28.680570 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:37:28.955448 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:37:29.111167 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:37:29.111357 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:37:29.111771 | orchestrator | 2025-05-05 00:37:29.112790 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-05 00:37:29.113190 | orchestrator | 2025-05-05 00:37:29.113694 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-05 00:37:29.114194 | orchestrator | Monday 05 May 2025 00:37:29 +0000 (0:00:00.753) 0:00:11.280 ************ 2025-05-05 00:37:30.751536 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:30.753481 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:37:30.754543 | orchestrator | changed: [testbed-node-4] 2025-05-05 
00:37:30.755482 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:37:30.756536 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:37:30.757475 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:37:30.758592 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:37:30.760000 | orchestrator | 2025-05-05 00:37:30.761046 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-05 00:37:30.761778 | orchestrator | Monday 05 May 2025 00:37:30 +0000 (0:00:01.639) 0:00:12.919 ************ 2025-05-05 00:37:32.401932 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:32.402179 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:37:32.403432 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:37:32.404950 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:37:32.405570 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:37:32.406600 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:37:32.406811 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:37:32.407825 | orchestrator | 2025-05-05 00:37:32.409663 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-05 00:37:32.410220 | orchestrator | Monday 05 May 2025 00:37:32 +0000 (0:00:01.648) 0:00:14.567 ************ 2025-05-05 00:37:33.887378 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:37:33.887599 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:37:33.887630 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:37:33.888603 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:37:33.888975 | orchestrator | ok: [testbed-manager] 2025-05-05 00:37:33.889680 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:37:33.890394 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:37:33.890764 | orchestrator | 2025-05-05 00:37:33.891318 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-05 00:37:33.891792 | orchestrator 
| Monday 05 May 2025 00:37:33 +0000 (0:00:01.489) 0:00:16.057 ************ 2025-05-05 00:37:35.658804 | orchestrator | changed: [testbed-manager] 2025-05-05 00:37:35.659117 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:37:35.660064 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:37:35.660843 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:37:35.661532 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:37:35.662927 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:37:35.663503 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:37:35.664305 | orchestrator | 2025-05-05 00:37:35.665568 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-05 00:37:35.665817 | orchestrator | Monday 05 May 2025 00:37:35 +0000 (0:00:01.769) 0:00:17.827 ************ 2025-05-05 00:37:35.811355 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:37:35.887020 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:37:35.962328 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:37:36.032401 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:37:36.275366 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:37:36.428622 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:37:36.429430 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:37:36.430356 | orchestrator | 2025-05-05 00:37:36.432280 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-05 00:37:36.432483 | orchestrator | 2025-05-05 00:37:36.433431 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-05 00:37:36.434113 | orchestrator | Monday 05 May 2025 00:37:36 +0000 (0:00:00.770) 0:00:18.598 ************ 2025-05-05 00:37:38.939256 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:37:38.939647 | orchestrator | ok: [testbed-manager] 2025-05-05 00:37:38.940990 | orchestrator | ok: 
[testbed-node-4] 2025-05-05 00:37:38.941317 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:37:38.942490 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:37:38.944298 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:37:38.945643 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:37:38.946094 | orchestrator | 2025-05-05 00:37:38.947087 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:37:38.947576 | orchestrator | 2025-05-05 00:37:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:37:38.947811 | orchestrator | 2025-05-05 00:37:38 | INFO  | Please wait and do not abort execution. 2025-05-05 00:37:38.948919 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:37:38.949910 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:38.950535 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:38.951871 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:38.952124 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:38.952828 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:38.953715 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:38.953921 | orchestrator | 2025-05-05 00:37:38.954758 | orchestrator | Monday 05 May 2025 00:37:38 +0000 (0:00:02.511) 0:00:21.109 ************ 2025-05-05 00:37:38.955007 | orchestrator | =============================================================================== 2025-05-05 00:37:38.955548 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.89s 2025-05-05 00:37:38.955638 | orchestrator | Install python3-docker -------------------------------------------------- 2.51s 2025-05-05 00:37:38.956215 | orchestrator | Apply netplan configuration --------------------------------------------- 2.50s 2025-05-05 00:37:38.956946 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s 2025-05-05 00:37:38.957888 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s 2025-05-05 00:37:38.958310 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s 2025-05-05 00:37:38.958341 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-05-05 00:37:38.958742 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-05-05 00:37:38.959216 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2025-05-05 00:37:38.959683 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.77s 2025-05-05 00:37:38.960281 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2025-05-05 00:37:38.960588 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.74s 2025-05-05 00:37:39.478766 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-05 00:37:40.984408 | orchestrator | 2025-05-05 00:37:40 | INFO  | Task 9ee02cd2-5306-4814-b37b-da4e8a594d20 (reboot) was prepared for execution. 2025-05-05 00:37:43.976019 | orchestrator | 2025-05-05 00:37:40 | INFO  | It takes a moment until task 9ee02cd2-5306-4814-b37b-da4e8a594d20 (reboot) has been started and output is visible here. 
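The custom-CA play above copies `testbed.crt` to every node and then refreshes the trust store: `update-ca-certificates` ran (changed) on the Debian-family nodes while the `update-ca-trust` task was skipped. A minimal sketch of that branch logic follows; this is not the actual OSISM role, and the destination paths are the conventional trust-store locations, i.e. assumptions. Only the source certificate path is taken from the log.

```shell
# Sketch (not the OSISM playbook itself) of the CA-distribution play above.
# Source path is from the log; destination paths are assumed defaults.

ca_dest() {
    # Print the trust-store drop directory for a given OS family.
    case "$1" in
        Debian) echo /usr/local/share/ca-certificates ;;
        RedHat) echo /etc/pki/ca-trust/source/anchors ;;
        *)      return 1 ;;
    esac
}

install_ca() {
    # Copy the CA and refresh the system trust store (needs root).
    local family=$1
    local src=/opt/configuration/environments/kolla/certificates/ca/testbed.crt
    install -m 0644 "$src" "$(ca_dest "$family")/testbed.crt"
    case "$family" in
        Debian) update-ca-certificates ;;   # the task that ran above
        RedHat) update-ca-trust extract ;;  # the task skipped on Ubuntu
    esac
}
```

On a Debian-family node only the `update-ca-certificates` path fires, which matches the skipped `update-ca-trust` task in the log.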
2025-05-05 00:37:43.976214 | orchestrator | 2025-05-05 00:37:43.976331 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-05 00:37:43.977109 | orchestrator | 2025-05-05 00:37:43.979742 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-05 00:37:43.980902 | orchestrator | Monday 05 May 2025 00:37:43 +0000 (0:00:00.142) 0:00:00.142 ************ 2025-05-05 00:37:44.085935 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:37:44.086235 | orchestrator | 2025-05-05 00:37:44.086864 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-05 00:37:44.087660 | orchestrator | Monday 05 May 2025 00:37:44 +0000 (0:00:00.112) 0:00:00.255 ************ 2025-05-05 00:37:45.012789 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:37:45.013481 | orchestrator | 2025-05-05 00:37:45.014553 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-05 00:37:45.016458 | orchestrator | Monday 05 May 2025 00:37:45 +0000 (0:00:00.926) 0:00:01.181 ************ 2025-05-05 00:37:45.131167 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:37:45.131550 | orchestrator | 2025-05-05 00:37:45.132370 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-05 00:37:45.133435 | orchestrator | 2025-05-05 00:37:45.134595 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-05 00:37:45.135189 | orchestrator | Monday 05 May 2025 00:37:45 +0000 (0:00:00.118) 0:00:01.300 ************ 2025-05-05 00:37:45.222312 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:37:45.222471 | orchestrator | 2025-05-05 00:37:45.223808 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-05 00:37:45.224204 | orchestrator | Monday 05 May 2025 
00:37:45 +0000 (0:00:00.090) 0:00:01.390 ************ 2025-05-05 00:37:45.831673 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:37:45.832692 | orchestrator | 2025-05-05 00:37:45.832750 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-05 00:37:45.832922 | orchestrator | Monday 05 May 2025 00:37:45 +0000 (0:00:00.610) 0:00:02.000 ************ 2025-05-05 00:37:45.934336 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:37:45.936423 | orchestrator | 2025-05-05 00:37:45.937185 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-05 00:37:45.937244 | orchestrator | 2025-05-05 00:37:45.938172 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-05 00:37:45.938546 | orchestrator | Monday 05 May 2025 00:37:45 +0000 (0:00:00.100) 0:00:02.100 ************ 2025-05-05 00:37:46.043518 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:37:46.043822 | orchestrator | 2025-05-05 00:37:46.044152 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-05 00:37:46.045454 | orchestrator | Monday 05 May 2025 00:37:46 +0000 (0:00:00.111) 0:00:02.211 ************ 2025-05-05 00:37:46.788366 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:37:46.789152 | orchestrator | 2025-05-05 00:37:46.790164 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-05 00:37:46.792678 | orchestrator | Monday 05 May 2025 00:37:46 +0000 (0:00:00.745) 0:00:02.957 ************ 2025-05-05 00:37:46.914726 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:37:46.915233 | orchestrator | 2025-05-05 00:37:46.916129 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-05 00:37:46.916928 | orchestrator | 2025-05-05 00:37:46.917680 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2025-05-05 00:37:46.918296 | orchestrator | Monday 05 May 2025 00:37:46 +0000 (0:00:00.125) 0:00:03.083 ************ 2025-05-05 00:37:47.017215 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:37:47.017762 | orchestrator | 2025-05-05 00:37:47.018744 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-05 00:37:47.020465 | orchestrator | Monday 05 May 2025 00:37:47 +0000 (0:00:00.103) 0:00:03.186 ************ 2025-05-05 00:37:47.639371 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:37:47.639575 | orchestrator | 2025-05-05 00:37:47.639914 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-05 00:37:47.640523 | orchestrator | Monday 05 May 2025 00:37:47 +0000 (0:00:00.620) 0:00:03.807 ************ 2025-05-05 00:37:47.752736 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:37:47.753300 | orchestrator | 2025-05-05 00:37:47.754443 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-05 00:37:47.755178 | orchestrator | 2025-05-05 00:37:47.756120 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-05 00:37:47.757090 | orchestrator | Monday 05 May 2025 00:37:47 +0000 (0:00:00.111) 0:00:03.919 ************ 2025-05-05 00:37:47.854880 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:37:47.855640 | orchestrator | 2025-05-05 00:37:47.856256 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-05 00:37:47.857084 | orchestrator | Monday 05 May 2025 00:37:47 +0000 (0:00:00.105) 0:00:04.024 ************ 2025-05-05 00:37:48.535712 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:37:48.536642 | orchestrator | 2025-05-05 00:37:48.537218 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-05-05 00:37:48.538128 | orchestrator | Monday 05 May 2025 00:37:48 +0000 (0:00:00.679) 0:00:04.703 ************ 2025-05-05 00:37:48.661325 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:37:48.661557 | orchestrator | 2025-05-05 00:37:48.663361 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-05 00:37:48.665001 | orchestrator | 2025-05-05 00:37:48.665442 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-05 00:37:48.666455 | orchestrator | Monday 05 May 2025 00:37:48 +0000 (0:00:00.124) 0:00:04.828 ************ 2025-05-05 00:37:48.763140 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:37:48.763337 | orchestrator | 2025-05-05 00:37:48.764317 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-05 00:37:48.765119 | orchestrator | Monday 05 May 2025 00:37:48 +0000 (0:00:00.103) 0:00:04.932 ************ 2025-05-05 00:37:49.454264 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:37:49.455338 | orchestrator | 2025-05-05 00:37:49.456058 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-05 00:37:49.456906 | orchestrator | Monday 05 May 2025 00:37:49 +0000 (0:00:00.690) 0:00:05.623 ************ 2025-05-05 00:37:49.495690 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:37:49.495922 | orchestrator | 2025-05-05 00:37:49.498575 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:37:49.499193 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:49.499239 | orchestrator | 2025-05-05 00:37:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-05 00:37:49.500163 | orchestrator | 2025-05-05 00:37:49 | INFO  | Please wait and do not abort execution. 2025-05-05 00:37:49.500227 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:49.500804 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:49.501428 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:49.502002 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:49.503053 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:37:49.504001 | orchestrator | 2025-05-05 00:37:49.504780 | orchestrator | Monday 05 May 2025 00:37:49 +0000 (0:00:00.042) 0:00:05.665 ************ 2025-05-05 00:37:49.505417 | orchestrator | =============================================================================== 2025-05-05 00:37:49.506414 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2025-05-05 00:37:49.507280 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2025-05-05 00:37:49.507558 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2025-05-05 00:37:50.016383 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-05 00:37:51.458432 | orchestrator | 2025-05-05 00:37:51 | INFO  | Task 6c8efd4a-abd1-4ae2-b16c-640d350e78bb (wait-for-connection) was prepared for execution. 2025-05-05 00:37:54.488390 | orchestrator | 2025-05-05 00:37:51 | INFO  | It takes a moment until task 6c8efd4a-abd1-4ae2-b16c-640d350e78bb (wait-for-connection) has been started and output is visible here. 
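Note the pattern in the reboot plays above: each node is handled serially, the "do not wait" variant triggers the reboot, and the "wait for the reboot to complete" task is skipped, because reachability is verified by the separate `wait-for-connection` run that follows. A rough shell equivalent of the fire-and-forget half (the exact reboot command used by the playbook is an assumption):

```shell
# Sketch of the serial "reboot, do not wait" pattern from the plays above.
# Host names mirror the log; the remote reboot command is an assumption.

reboot_nodes() {
    local node
    for node in "$@"; do
        # Detach the reboot so the SSH connection dropping does not fail
        # the loop; a short sleep lets sshd acknowledge the command first.
        ssh "$node" 'sudo sh -c "sleep 2 && systemctl reboot" >/dev/null 2>&1 &' \
            || true
        echo "reboot triggered: $node"
    done
}
```

Decoupling the reboot from the wait keeps a hung node from blocking the trigger loop; the wait happens once, for all nodes in parallel, in the next task.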
2025-05-05 00:37:54.488600 | orchestrator | 2025-05-05 00:37:54.488700 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-05 00:37:54.489028 | orchestrator | 2025-05-05 00:37:54.491462 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-05 00:37:54.493256 | orchestrator | Monday 05 May 2025 00:37:54 +0000 (0:00:00.171) 0:00:00.171 ************ 2025-05-05 00:38:08.165479 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:38:08.166192 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:38:08.166227 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:38:08.166244 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:38:08.166259 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:38:08.166281 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:38:08.166939 | orchestrator | 2025-05-05 00:38:08.167481 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:38:08.167882 | orchestrator | 2025-05-05 00:38:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:38:08.168889 | orchestrator | 2025-05-05 00:38:08 | INFO  | Please wait and do not abort execution. 
2025-05-05 00:38:08.168928 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:08.169208 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:08.169582 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:08.169927 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:08.170842 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:08.171024 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:08.171470 | orchestrator | 2025-05-05 00:38:08.171495 | orchestrator | Monday 05 May 2025 00:38:08 +0000 (0:00:13.675) 0:00:13.847 ************ 2025-05-05 00:38:08.171710 | orchestrator | =============================================================================== 2025-05-05 00:38:08.172015 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.68s 2025-05-05 00:38:08.636633 | orchestrator | + osism apply hddtemp 2025-05-05 00:38:10.107873 | orchestrator | 2025-05-05 00:38:10 | INFO  | Task 9fb1f2fc-9174-43d0-b2b9-a23163502102 (hddtemp) was prepared for execution. 2025-05-05 00:38:13.204219 | orchestrator | 2025-05-05 00:38:10 | INFO  | It takes a moment until task 9fb1f2fc-9174-43d0-b2b9-a23163502102 (hddtemp) has been started and output is visible here. 
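The `wait-for-connection` play that just completed polls each rebooted node until SSH answers again (about 13.7s here). A rough shell equivalent, with illustrative timeout and interval values that are not taken from the playbook:

```shell
# Sketch of the "Wait until remote system is reachable" task above.
# Timeout/interval are illustrative defaults, not playbook values.

wait_for_ssh() {
    local node=$1 timeout=${2:-300} interval=5 waited=0
    # Poll until a trivial remote command succeeds or the timeout expires.
    while ! ssh -o ConnectTimeout=5 -o BatchMode=yes "$node" true 2>/dev/null; do
        waited=$((waited + interval))
        if [ "$waited" -ge "$timeout" ]; then
            echo "timeout waiting for $node" >&2
            return 1
        fi
        sleep "$interval"
    done
    echo "$node reachable"
}
```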
2025-05-05 00:38:13.204404 | orchestrator | 2025-05-05 00:38:13.205460 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-05 00:38:13.206838 | orchestrator | 2025-05-05 00:38:13.208899 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-05 00:38:13.209827 | orchestrator | Monday 05 May 2025 00:38:13 +0000 (0:00:00.205) 0:00:00.205 ************ 2025-05-05 00:38:13.353235 | orchestrator | ok: [testbed-manager] 2025-05-05 00:38:13.430341 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:38:13.505663 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:38:13.578364 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:38:13.653842 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:38:13.885118 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:38:13.885331 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:38:13.888661 | orchestrator | 2025-05-05 00:38:15.133030 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-05 00:38:15.133176 | orchestrator | Monday 05 May 2025 00:38:13 +0000 (0:00:00.681) 0:00:00.886 ************ 2025-05-05 00:38:15.133213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:38:15.133285 | orchestrator | 2025-05-05 00:38:15.133307 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-05 00:38:15.133521 | orchestrator | Monday 05 May 2025 00:38:15 +0000 (0:00:01.245) 0:00:02.132 ************ 2025-05-05 00:38:17.345326 | orchestrator | ok: [testbed-manager] 2025-05-05 00:38:17.345501 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:38:17.346319 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:38:17.348569 | 
orchestrator | ok: [testbed-node-2] 2025-05-05 00:38:17.348999 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:38:17.353046 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:38:17.353320 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:38:17.353348 | orchestrator | 2025-05-05 00:38:17.357554 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-05 00:38:17.997874 | orchestrator | Monday 05 May 2025 00:38:17 +0000 (0:00:02.217) 0:00:04.349 ************ 2025-05-05 00:38:17.998087 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:38:18.098642 | orchestrator | changed: [testbed-manager] 2025-05-05 00:38:18.616221 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:38:18.617105 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:38:18.620585 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:38:18.620828 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:38:18.620853 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:38:18.620868 | orchestrator | 2025-05-05 00:38:18.620884 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-05 00:38:18.620905 | orchestrator | Monday 05 May 2025 00:38:18 +0000 (0:00:01.266) 0:00:05.615 ************ 2025-05-05 00:38:20.810988 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:38:20.811561 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:38:20.812572 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:38:20.812996 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:38:20.816613 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:38:20.816877 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:38:20.816902 | orchestrator | ok: [testbed-manager] 2025-05-05 00:38:20.816917 | orchestrator | 2025-05-05 00:38:20.816938 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-05 00:38:20.817667 | orchestrator | Monday 05 May 2025 00:38:20 +0000 
(0:00:02.196) 0:00:07.812 ************ 2025-05-05 00:38:21.069610 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:38:21.154335 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:38:21.242224 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:38:21.322072 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:38:21.436978 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:38:21.437541 | orchestrator | changed: [testbed-manager] 2025-05-05 00:38:21.439164 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:38:21.440753 | orchestrator | 2025-05-05 00:38:21.441616 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-05 00:38:21.442909 | orchestrator | Monday 05 May 2025 00:38:21 +0000 (0:00:00.629) 0:00:08.441 ************ 2025-05-05 00:38:34.741645 | orchestrator | changed: [testbed-manager] 2025-05-05 00:38:34.741893 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:38:34.742750 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:38:34.743704 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:38:34.744821 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:38:34.745176 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:38:34.745917 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:38:34.746316 | orchestrator | 2025-05-05 00:38:34.746918 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-05 00:38:34.747599 | orchestrator | Monday 05 May 2025 00:38:34 +0000 (0:00:13.288) 0:00:21.730 ************ 2025-05-05 00:38:36.086519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:38:36.086682 | orchestrator | 2025-05-05 00:38:36.087427 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-05-05 00:38:36.087955 | orchestrator | Monday 05 May 2025 00:38:36 +0000 (0:00:01.350) 0:00:23.080 ************ 2025-05-05 00:38:38.089147 | orchestrator | changed: [testbed-manager] 2025-05-05 00:38:38.089463 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:38:38.089499 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:38:38.089515 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:38:38.089536 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:38:38.090274 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:38:38.091507 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:38:38.092919 | orchestrator | 2025-05-05 00:38:38.093596 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:38:38.094585 | orchestrator | 2025-05-05 00:38:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:38:38.094730 | orchestrator | 2025-05-05 00:38:38 | INFO  | Please wait and do not abort execution. 
2025-05-05 00:38:38.096104 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:38:38.097263 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:38.097951 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:38.099027 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:38.099436 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:38.100367 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:38.101055 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:38.101227 | orchestrator | 2025-05-05 00:38:38.101880 | orchestrator | Monday 05 May 2025 00:38:38 +0000 (0:00:02.000) 0:00:25.081 ************ 2025-05-05 00:38:38.102298 | orchestrator | =============================================================================== 2025-05-05 00:38:38.103970 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.29s 2025-05-05 00:38:38.105097 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.22s 2025-05-05 00:38:38.105839 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.20s 2025-05-05 00:38:38.106306 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.00s 2025-05-05 00:38:38.106706 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 2025-05-05 00:38:38.107330 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.27s 2025-05-05 00:38:38.107545 | orchestrator | 
osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s
2025-05-05 00:38:38.107993 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s
2025-05-05 00:38:38.108441 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.63s
2025-05-05 00:38:38.706554 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-05 00:38:40.062418 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-05 00:38:40.062628 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-05 00:38:40.062659 | orchestrator | + local max_attempts=60
2025-05-05 00:38:40.063019 | orchestrator | + local name=ceph-ansible
2025-05-05 00:38:40.063048 | orchestrator | + local attempt_num=1
2025-05-05 00:38:40.063068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-05 00:38:40.096704 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-05 00:38:40.096885 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-05 00:38:40.097068 | orchestrator | + local max_attempts=60
2025-05-05 00:38:40.097085 | orchestrator | + local name=kolla-ansible
2025-05-05 00:38:40.097093 | orchestrator | + local attempt_num=1
2025-05-05 00:38:40.097106 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-05 00:38:40.123891 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-05 00:38:40.124440 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-05 00:38:40.124474 | orchestrator | + local max_attempts=60
2025-05-05 00:38:40.124491 | orchestrator | + local name=osism-ansible
2025-05-05 00:38:40.124507 | orchestrator | + local attempt_num=1
2025-05-05 00:38:40.124529 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-05 00:38:40.152135 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-05 00:38:40.321514 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-05 00:38:40.321673 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-05 00:38:40.321727 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-05 00:38:40.507162 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-05 00:38:40.694341 | orchestrator | ARA in osism-ansible already disabled.
2025-05-05 00:38:40.868670 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-05 00:38:40.869981 | orchestrator | + osism apply gather-facts
2025-05-05 00:38:42.373132 | orchestrator | 2025-05-05 00:38:42 | INFO  | Task a93d409e-956a-4b0c-8cfb-5c6846996394 (gather-facts) was prepared for execution.
2025-05-05 00:38:45.507117 | orchestrator | 2025-05-05 00:38:42 | INFO  | It takes a moment until task a93d409e-956a-4b0c-8cfb-5c6846996394 (gather-facts) has been started and output is visible here.
2025-05-05 00:38:45.507364 | orchestrator |
2025-05-05 00:38:45.508326 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-05 00:38:45.511455 | orchestrator |
2025-05-05 00:38:45.512314 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-05 00:38:45.514190 | orchestrator | Monday 05 May 2025 00:38:45 +0000 (0:00:00.158) 0:00:00.158 ************
2025-05-05 00:38:50.534388 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:38:50.536386 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:38:50.536725 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:38:50.537874 | orchestrator | ok: [testbed-manager]
2025-05-05 00:38:50.538592 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:38:50.540790 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:38:50.541026 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:38:50.541617 | orchestrator |
2025-05-05 00:38:50.542328 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-05 00:38:50.542848 | orchestrator | 2025-05-05 00:38:50.543489 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-05 00:38:50.544068 | orchestrator | Monday 05 May 2025 00:38:50 +0000 (0:00:05.033) 0:00:05.191 ************ 2025-05-05 00:38:50.685195 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:38:50.756177 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:38:50.839096 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:38:50.922992 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:38:50.999481 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:38:51.038618 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:38:51.038746 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:38:51.039437 | orchestrator | 2025-05-05 00:38:51.040978 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:38:51.041542 | orchestrator | 2025-05-05 00:38:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:38:51.041656 | orchestrator | 2025-05-05 00:38:51 | INFO  | Please wait and do not abort execution. 
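The xtrace earlier in this step (`+ wait_for_container_healthy 60 ceph-ansible` and friends) only shows the variable assignments and the inspect call, not the function body. A hypothetical reconstruction of that polling helper, with the retry loop and sleep interval as assumptions:

```shell
# Hypothetical reconstruction of wait_for_container_healthy as seen in the
# xtrace: poll the container's health status until it reports "healthy".
# Variable names and the inspect call are from the trace; the loop shape,
# sleep interval and error message are assumptions. The trace invokes
# /usr/bin/docker; a bare `docker` is used here so it can be stubbed.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

In this run all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy on the first probe, so the loop body never executed.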
2025-05-05 00:38:51.042705 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.043215 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.043701 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.044939 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.045241 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.045684 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.046264 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 00:38:51.046974 | orchestrator | 2025-05-05 00:38:51.047784 | orchestrator | Monday 05 May 2025 00:38:51 +0000 (0:00:00.505) 0:00:05.696 ************ 2025-05-05 00:38:51.048301 | orchestrator | =============================================================================== 2025-05-05 00:38:51.048949 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.03s 2025-05-05 00:38:51.049522 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-05-05 00:38:51.521893 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-05 00:38:51.538131 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-05 00:38:51.555711 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-05 00:38:51.569618 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-05 00:38:51.590874 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-05 00:38:51.607519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-05 00:38:51.626150 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-05 00:38:51.646451 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-05 00:38:51.662358 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-05 00:38:51.679076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-05 00:38:51.696484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-05 00:38:51.711144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-05 00:38:51.728171 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-05 00:38:51.739658 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-05 00:38:51.751369 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-05 00:38:51.762834 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-05 00:38:51.774482 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-05 00:38:51.785548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-05 00:38:51.796476 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-05 00:38:51.807287 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-05 00:38:51.817946 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-05 00:38:52.085121 | orchestrator | changed 2025-05-05 00:38:52.154207 | 2025-05-05 00:38:52.154375 | TASK [Deploy services] 2025-05-05 00:38:52.270366 | orchestrator | skipping: Conditional result was False 2025-05-05 00:38:52.290353 | 2025-05-05 00:38:52.290509 | TASK [Deploy in a nutshell] 2025-05-05 00:38:53.026287 | orchestrator | + set -e 2025-05-05 00:38:53.026550 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-05 00:38:53.026582 | orchestrator | ++ export INTERACTIVE=false 2025-05-05 00:38:53.026600 | orchestrator | ++ INTERACTIVE=false 2025-05-05 00:38:53.026645 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-05 00:38:53.026664 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-05 00:38:53.026680 | orchestrator | + source /opt/manager-vars.sh 2025-05-05 00:38:53.026705 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-05 00:38:53.026729 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-05 00:38:53.026745 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-05 00:38:53.026794 | orchestrator | ++ CEPH_VERSION=reef 2025-05-05 00:38:53.026809 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-05 00:38:53.026824 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-05 00:38:53.026838 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-05 00:38:53.026852 | orchestrator | ++ MANAGER_VERSION=8.1.0 
2025-05-05 00:38:53.026867 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-05 00:38:53.026881 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-05 00:38:53.026895 | orchestrator | ++ export ARA=false 2025-05-05 00:38:53.026910 | orchestrator | ++ ARA=false 2025-05-05 00:38:53.026933 | orchestrator | ++ export TEMPEST=false 2025-05-05 00:38:53.027819 | orchestrator | ++ TEMPEST=false 2025-05-05 00:38:53.027856 | orchestrator | ++ export IS_ZUUL=true 2025-05-05 00:38:53.027883 | orchestrator | ++ IS_ZUUL=true 2025-05-05 00:38:53.027906 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-05 00:38:53.027925 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-05 00:38:53.027941 | orchestrator | ++ export EXTERNAL_API=false 2025-05-05 00:38:53.027958 | orchestrator | ++ EXTERNAL_API=false 2025-05-05 00:38:53.027972 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-05 00:38:53.027986 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-05 00:38:53.028001 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-05 00:38:53.028015 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-05 00:38:53.028029 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-05 00:38:53.028052 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-05 00:38:53.028067 | orchestrator | + echo 2025-05-05 00:38:53.028081 | orchestrator | 2025-05-05 00:38:53.028095 | orchestrator | # PULL IMAGES 2025-05-05 00:38:53.028109 | orchestrator | 2025-05-05 00:38:53.028123 | orchestrator | + echo '# PULL IMAGES' 2025-05-05 00:38:53.028137 | orchestrator | + echo 2025-05-05 00:38:53.028157 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-05 00:38:53.084124 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-05 00:38:54.442525 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-05 00:38:54.442741 | orchestrator | 2025-05-05 00:38:54 | INFO  | Trying to run play pull-images in environment custom 2025-05-05 00:38:54.497553 | orchestrator | 2025-05-05 00:38:54 
| INFO  | Task 95efd1c4-80cb-49e6-b220-30d4d84cef1a (pull-images) was prepared for execution. 2025-05-05 00:38:57.563058 | orchestrator | 2025-05-05 00:38:54 | INFO  | It takes a moment until task 95efd1c4-80cb-49e6-b220-30d4d84cef1a (pull-images) has been started and output is visible here. 2025-05-05 00:38:57.563245 | orchestrator | 2025-05-05 00:38:57.563980 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-05 00:38:57.564341 | orchestrator | 2025-05-05 00:38:57.566878 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-05 00:38:57.567162 | orchestrator | Monday 05 May 2025 00:38:57 +0000 (0:00:00.141) 0:00:00.141 ************ 2025-05-05 00:39:35.951402 | orchestrator | changed: [testbed-manager] 2025-05-05 00:40:20.865404 | orchestrator | 2025-05-05 00:40:20.865570 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-05 00:40:20.865595 | orchestrator | Monday 05 May 2025 00:39:35 +0000 (0:00:38.388) 0:00:38.529 ************ 2025-05-05 00:40:20.865628 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-05 00:40:20.865810 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-05 00:40:20.865975 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-05 00:40:20.865998 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-05 00:40:20.866088 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-05 00:40:20.866111 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-05 00:40:20.866126 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-05 00:40:20.866144 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-05 00:40:20.866235 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-05 00:40:20.866330 | orchestrator | changed: [testbed-manager] => (item=ironic) 
2025-05-05 00:40:20.870590 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-05 00:40:20.870670 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-05 00:40:20.870695 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-05 00:40:20.870793 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-05 00:40:20.870813 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-05 00:40:20.870828 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-05 00:40:20.870842 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-05 00:40:20.870856 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-05 00:40:20.870870 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-05 00:40:20.870883 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-05 00:40:20.870897 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-05 00:40:20.870911 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-05 00:40:20.870926 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-05 00:40:20.870941 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-05 00:40:20.870955 | orchestrator | 2025-05-05 00:40:20.870969 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:40:20.870984 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:40:20.871000 | orchestrator | 2025-05-05 00:40:20.871015 | orchestrator | Monday 05 May 2025 00:40:20 +0000 (0:00:44.914) 0:01:23.443 ************ 2025-05-05 00:40:20.871030 | orchestrator | 2025-05-05 00:40:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:40:20.871078 | orchestrator | 2025-05-05 00:40:20 | INFO  | Please wait and do not abort execution. 
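Earlier in the trace the deploy script ran `semver 8.1.0 7.0.0` and tested the result with `[[ 1 -ge 0 ]]`, which suggests a comparator that prints 1, 0 or -1 depending on version order. A hypothetical shell equivalent, handling only plain X.Y.Z versions (the real helper's behavior, e.g. with pre-release tags, may differ):

```shell
# Hypothetical semver comparator matching the observed behavior
# (`semver 8.1.0 7.0.0` printed 1). Pre-release/build metadata is not
# handled; this is an assumption for illustration, not the real helper.
semver() {
    local IFS=.
    local -a a b
    read -r -a a <<< "$1"   # split first version on dots
    read -r -a b <<< "$2"   # split second version on dots
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then printf '%s\n' 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then printf '%s\n' -1; return; fi
    done
    printf '%s\n' 0
}
```

With MANAGER_VERSION=8.1.0 the `-ge 0` gate passes, which is why the `osism apply -r 2 -e custom pull-images` path ran here.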
2025-05-05 00:40:20.871102 | orchestrator | =============================================================================== 2025-05-05 00:40:20.871156 | orchestrator | Pull other images ------------------------------------------------------ 44.91s 2025-05-05 00:40:20.871177 | orchestrator | Pull keystone image ---------------------------------------------------- 38.39s 2025-05-05 00:40:22.499657 | orchestrator | 2025-05-05 00:40:22 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-05 00:40:22.539157 | orchestrator | 2025-05-05 00:40:22 | INFO  | Task 3c7bd9e2-da7e-4dc1-8b2c-696443cffa28 (wipe-partitions) was prepared for execution. 2025-05-05 00:40:25.132372 | orchestrator | 2025-05-05 00:40:22 | INFO  | It takes a moment until task 3c7bd9e2-da7e-4dc1-8b2c-696443cffa28 (wipe-partitions) has been started and output is visible here. 2025-05-05 00:40:25.132524 | orchestrator | 2025-05-05 00:40:25.132647 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-05 00:40:25.132679 | orchestrator | 2025-05-05 00:40:25.132761 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-05 00:40:25.133011 | orchestrator | Monday 05 May 2025 00:40:25 +0000 (0:00:00.092) 0:00:00.092 ************ 2025-05-05 00:40:25.640063 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:40:25.640849 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:40:25.641599 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:40:25.644858 | orchestrator | 2025-05-05 00:40:25.648526 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-05 00:40:25.779324 | orchestrator | Monday 05 May 2025 00:40:25 +0000 (0:00:00.510) 0:00:00.602 ************ 2025-05-05 00:40:25.779446 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:40:25.863638 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:40:25.864104 | 
orchestrator | skipping: [testbed-node-5] 2025-05-05 00:40:25.864560 | orchestrator | 2025-05-05 00:40:25.868747 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-05 00:40:25.868826 | orchestrator | Monday 05 May 2025 00:40:25 +0000 (0:00:00.224) 0:00:00.827 ************ 2025-05-05 00:40:26.518104 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:40:26.519967 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:40:26.520150 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:40:26.520171 | orchestrator | 2025-05-05 00:40:26.520334 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-05 00:40:26.520450 | orchestrator | Monday 05 May 2025 00:40:26 +0000 (0:00:00.653) 0:00:01.480 ************ 2025-05-05 00:40:26.669583 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:40:26.749579 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:40:26.749750 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:40:26.751082 | orchestrator | 2025-05-05 00:40:26.751189 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-05 00:40:26.751217 | orchestrator | Monday 05 May 2025 00:40:26 +0000 (0:00:00.233) 0:00:01.713 ************ 2025-05-05 00:40:27.942871 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-05 00:40:27.943267 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-05 00:40:27.944512 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-05 00:40:27.946585 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-05 00:40:27.946637 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-05 00:40:27.948632 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-05 00:40:27.948958 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-05 00:40:27.949692 | orchestrator | changed: 
[testbed-node-4] => (item=/dev/sdd) 2025-05-05 00:40:27.952596 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-05 00:40:29.186860 | orchestrator | 2025-05-05 00:40:29.186969 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-05 00:40:29.186987 | orchestrator | Monday 05 May 2025 00:40:27 +0000 (0:00:01.192) 0:00:02.906 ************ 2025-05-05 00:40:29.187012 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-05 00:40:29.187931 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-05 00:40:29.187957 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-05 00:40:29.187977 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-05 00:40:29.189001 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-05 00:40:29.189390 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-05 00:40:29.189965 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-05 00:40:29.191060 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-05 00:40:29.191540 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-05 00:40:29.193461 | orchestrator | 2025-05-05 00:40:31.564162 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-05 00:40:31.564370 | orchestrator | Monday 05 May 2025 00:40:29 +0000 (0:00:01.239) 0:00:04.145 ************ 2025-05-05 00:40:31.564920 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-05 00:40:31.565153 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-05 00:40:31.565180 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-05 00:40:31.565196 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-05 00:40:31.565227 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-05 00:40:31.565249 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-05 
00:40:31.565462 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-05 00:40:31.565996 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-05 00:40:31.566274 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-05 00:40:31.566323 | orchestrator | 2025-05-05 00:40:31.566558 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-05 00:40:31.566866 | orchestrator | Monday 05 May 2025 00:40:31 +0000 (0:00:02.376) 0:00:06.522 ************ 2025-05-05 00:40:32.135993 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:40:32.138243 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:40:32.138270 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:40:32.138287 | orchestrator | 2025-05-05 00:40:32.138464 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-05 00:40:32.138486 | orchestrator | Monday 05 May 2025 00:40:32 +0000 (0:00:00.575) 0:00:07.097 ************ 2025-05-05 00:40:32.763878 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:40:32.764600 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:40:32.764781 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:40:32.764867 | orchestrator | 2025-05-05 00:40:32.765336 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:40:32.765743 | orchestrator | 2025-05-05 00:40:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:40:32.766651 | orchestrator | 2025-05-05 00:40:32 | INFO  | Please wait and do not abort execution. 
2025-05-05 00:40:32.768136 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:32.768487 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:32.769170 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:32.769725 | orchestrator | 2025-05-05 00:40:32.771431 | orchestrator | Monday 05 May 2025 00:40:32 +0000 (0:00:00.626) 0:00:07.723 ************ 2025-05-05 00:40:32.772151 | orchestrator | =============================================================================== 2025-05-05 00:40:32.772580 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.38s 2025-05-05 00:40:32.773167 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.24s 2025-05-05 00:40:32.773772 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2025-05-05 00:40:32.774280 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.65s 2025-05-05 00:40:32.774868 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-05-05 00:40:32.775433 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-05-05 00:40:32.776061 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.51s 2025-05-05 00:40:32.776431 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-05-05 00:40:32.776973 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2025-05-05 00:40:34.710112 | orchestrator | 2025-05-05 00:40:34 | INFO  | Task 397095b6-3015-40bb-a16e-033a66544550 (facts) was prepared for execution. 
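The wipe-partitions play above boils down to a few standard commands per device: drop filesystem/RAID signatures, zero the start of the disk, then re-sync udev. A minimal per-device sketch (the device argument is a stand-in; the play iterated /dev/sdb, /dev/sdc and /dev/sdd on nodes 3-5 — do not run this against a disk you care about):

```shell
# Sketch of the per-device wipe mirrored from the play's task names:
# "Wipe partitions with wipefs" and "Overwrite first 32M with zeros".
wipe_device() {
    local dev=$1
    # Remove filesystem/RAID/partition-table signatures (guarded so the
    # sketch degrades gracefully where wipefs is unavailable).
    wipefs --all "$dev" 2>/dev/null || true
    # Zero the first 32 MiB; conv=notrunc so an image file keeps its size.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
}
# Afterwards the play reloads udev rules and requests device events,
# roughly: udevadm control --reload-rules && udevadm trigger
```

The `ok:` (not `changed:`) results on the wipefs task indicate the disks carried no signatures to remove in this run; the dd step always reports changed.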
2025-05-05 00:40:37.871136 | orchestrator | 2025-05-05 00:40:34 | INFO  | It takes a moment until task 397095b6-3015-40bb-a16e-033a66544550 (facts) has been started and output is visible here. 2025-05-05 00:40:37.871353 | orchestrator | 2025-05-05 00:40:37.871430 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-05 00:40:37.871454 | orchestrator | 2025-05-05 00:40:37.872851 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-05 00:40:37.872953 | orchestrator | Monday 05 May 2025 00:40:37 +0000 (0:00:00.223) 0:00:00.223 ************ 2025-05-05 00:40:39.000075 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:40:39.002419 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:40:39.002979 | orchestrator | ok: [testbed-manager] 2025-05-05 00:40:39.005492 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:40:39.006319 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:40:39.006609 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:40:39.007637 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:40:39.008994 | orchestrator | 2025-05-05 00:40:39.009087 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-05 00:40:39.009548 | orchestrator | Monday 05 May 2025 00:40:38 +0000 (0:00:01.126) 0:00:01.350 ************ 2025-05-05 00:40:39.158170 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:40:39.247063 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:40:39.323957 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:40:39.399891 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:40:39.473997 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:40:40.183149 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:40:40.187681 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:40:40.190165 | orchestrator | 2025-05-05 00:40:40.190220 | orchestrator | PLAY [Gather facts for 
all hosts] ********************************************** 2025-05-05 00:40:40.190908 | orchestrator | 2025-05-05 00:40:40.192612 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-05 00:40:40.197829 | orchestrator | Monday 05 May 2025 00:40:40 +0000 (0:00:01.182) 0:00:02.533 ************ 2025-05-05 00:40:44.615137 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:40:44.629682 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:40:44.634443 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:40:44.637253 | orchestrator | ok: [testbed-manager] 2025-05-05 00:40:44.639981 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:40:44.642155 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:40:44.643093 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:40:44.644338 | orchestrator | 2025-05-05 00:40:44.646499 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-05 00:40:44.646829 | orchestrator | 2025-05-05 00:40:44.647509 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-05 00:40:44.647604 | orchestrator | Monday 05 May 2025 00:40:44 +0000 (0:00:04.405) 0:00:06.938 ************ 2025-05-05 00:40:44.904618 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:40:44.977464 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:40:45.054154 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:40:45.129654 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:40:45.207495 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:40:45.257837 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:40:45.258958 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:40:45.259725 | orchestrator | 2025-05-05 00:40:45.260783 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:40:45.261249 | orchestrator | 2025-05-05 00:40:45 | INFO  | 
Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:40:45.262686 | orchestrator | 2025-05-05 00:40:45 | INFO  | Please wait and do not abort execution. 2025-05-05 00:40:45.262737 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.263214 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.264578 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.265507 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.266315 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.267079 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.268191 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:40:45.268680 | orchestrator | 2025-05-05 00:40:45.269289 | orchestrator | Monday 05 May 2025 00:40:45 +0000 (0:00:00.672) 0:00:07.610 ************ 2025-05-05 00:40:45.271403 | orchestrator | =============================================================================== 2025-05-05 00:40:47.489927 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.41s 2025-05-05 00:40:47.490114 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2025-05-05 00:40:47.490138 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-05-05 00:40:47.490154 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2025-05-05 00:40:47.490189 | orchestrator | 2025-05-05 00:40:47 | INFO  | Task 
e6469636-7190-4506-b874-ac6ac3573598 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-05 00:40:51.864107 | orchestrator | 2025-05-05 00:40:47 | INFO  | It takes a moment until task e6469636-7190-4506-b874-ac6ac3573598 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-05-05 00:40:51.864235 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-05 00:40:52.458631 | orchestrator | 2025-05-05 00:40:52.463386 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-05 00:40:52.463640 | orchestrator | 2025-05-05 00:40:52.466084 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-05 00:40:52.698553 | orchestrator | Monday 05 May 2025 00:40:52 +0000 (0:00:00.480) 0:00:00.480 ************ 2025-05-05 00:40:52.698740 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-05 00:40:52.699356 | orchestrator | 2025-05-05 00:40:52.699399 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-05 00:40:52.700499 | orchestrator | Monday 05 May 2025 00:40:52 +0000 (0:00:00.241) 0:00:00.722 ************ 2025-05-05 00:40:52.959980 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:40:52.960134 | orchestrator | 2025-05-05 00:40:52.960474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:40:52.960813 | orchestrator | Monday 05 May 2025 00:40:52 +0000 (0:00:00.260) 0:00:00.983 ************ 2025-05-05 00:40:53.529641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-05 00:40:53.529996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-05 00:40:53.530118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 
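The repeated "Add known links to the list of available block devices" tasks include `_add-device-links.yml` once per kernel device (loop0..loop7, sda..sdd, sr0) and later match stable `scsi-0QEMU_...` identifiers to them. The actual include is not shown in the log; a rough stand-in for that lookup, with the directory layout assumed to be /dev/disk/by-id style symlinks:

```shell
# Hypothetical sketch: list by-id style symlinks and the kernel device each
# resolves to, which is the mapping the LVM-configuration play needs so it
# can address disks by stable IDs rather than by sdb/sdc/sdd names.
list_device_links() {
    local dir=${1:-/dev/disk/by-id}   # default path is an assumption
    local link
    for link in "$dir"/*; do
        [ -L "$link" ] || continue    # only symlinks are device aliases
        printf '%s -> %s\n' "${link##*/}" "$(basename "$(readlink -f "$link")")"
    done
}
```

On QEMU guests like these testbed nodes, each virtual disk typically appears twice (a `scsi-0QEMU_...` and a `scsi-SQEMU_...` alias), which matches the paired `ok:` items in the tasks that follow.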
2025-05-05 00:40:53.531311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-05 00:40:53.532429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-05 00:40:53.534565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-05 00:40:53.534660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-05 00:40:53.538863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-05 00:40:53.538950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-05 00:40:53.540215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-05 00:40:53.540324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-05 00:40:53.541933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-05 00:40:53.543192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-05 00:40:53.543649 | orchestrator |
2025-05-05 00:40:53.543762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:53.544450 | orchestrator | Monday 05 May 2025 00:40:53 +0000 (0:00:00.568) 0:00:01.552 ************
2025-05-05 00:40:53.735123 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:53.735767 | orchestrator |
2025-05-05 00:40:53.736171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:53.736526 | orchestrator | Monday 05 May 2025 00:40:53 +0000 (0:00:00.208) 0:00:01.760 ************
2025-05-05 00:40:53.925776 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:53.926166 | orchestrator |
2025-05-05 00:40:53.926286 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:53.926740 | orchestrator | Monday 05 May 2025 00:40:53 +0000 (0:00:00.191) 0:00:01.952 ************
2025-05-05 00:40:54.139258 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:54.366442 | orchestrator |
2025-05-05 00:40:54.366553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:54.366572 | orchestrator | Monday 05 May 2025 00:40:54 +0000 (0:00:00.207) 0:00:02.159 ************
2025-05-05 00:40:54.366603 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:54.367411 | orchestrator |
2025-05-05 00:40:54.367448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:54.372128 | orchestrator | Monday 05 May 2025 00:40:54 +0000 (0:00:00.226) 0:00:02.386 ************
2025-05-05 00:40:54.577042 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:54.577645 | orchestrator |
2025-05-05 00:40:54.578267 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:54.579520 | orchestrator | Monday 05 May 2025 00:40:54 +0000 (0:00:00.213) 0:00:02.599 ************
2025-05-05 00:40:54.781279 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:54.782189 | orchestrator |
2025-05-05 00:40:54.783546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:54.784611 | orchestrator | Monday 05 May 2025 00:40:54 +0000 (0:00:00.207) 0:00:02.807 ************
2025-05-05 00:40:54.993635 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:54.996667 | orchestrator |
2025-05-05 00:40:54.996945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:54.997015 | orchestrator | Monday 05 May 2025 00:40:54 +0000 (0:00:00.210) 0:00:03.018 ************
2025-05-05 00:40:55.175993 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:55.179356 | orchestrator |
2025-05-05 00:40:55.179845 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:55.183826 | orchestrator | Monday 05 May 2025 00:40:55 +0000 (0:00:00.183) 0:00:03.201 ************
2025-05-05 00:40:55.810284 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4)
2025-05-05 00:40:55.811415 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4)
2025-05-05 00:40:55.813476 | orchestrator |
2025-05-05 00:40:55.815154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:55.816534 | orchestrator | Monday 05 May 2025 00:40:55 +0000 (0:00:00.631) 0:00:03.832 ************
2025-05-05 00:40:56.715385 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7)
2025-05-05 00:40:56.715616 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7)
2025-05-05 00:40:56.717019 | orchestrator |
2025-05-05 00:40:56.718189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:56.718988 | orchestrator | Monday 05 May 2025 00:40:56 +0000 (0:00:00.905) 0:00:04.738 ************
2025-05-05 00:40:57.201069 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6)
2025-05-05 00:40:57.203438 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6)
2025-05-05 00:40:57.205462 | orchestrator |
2025-05-05 00:40:57.205839 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:57.206258 | orchestrator | Monday 05 May 2025 00:40:57 +0000 (0:00:00.484) 0:00:05.222 ************
2025-05-05 00:40:57.632209 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8)
2025-05-05 00:40:57.634870 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8)
2025-05-05 00:40:57.635557 | orchestrator |
2025-05-05 00:40:57.635611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:40:57.636544 | orchestrator | Monday 05 May 2025 00:40:57 +0000 (0:00:00.433) 0:00:05.655 ************
2025-05-05 00:40:57.960468 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-05 00:40:57.963174 | orchestrator |
2025-05-05 00:40:57.963441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:57.964078 | orchestrator | Monday 05 May 2025 00:40:57 +0000 (0:00:00.328) 0:00:05.984 ************
2025-05-05 00:40:58.394227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-05 00:40:58.395119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-05 00:40:58.396558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-05 00:40:58.397221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-05 00:40:58.397252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-05 00:40:58.397898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-05 00:40:58.398566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-05 00:40:58.399470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-05 00:40:58.399960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-05 00:40:58.400361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-05 00:40:58.401339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-05 00:40:58.401532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-05 00:40:58.402083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-05 00:40:58.402565 | orchestrator |
2025-05-05 00:40:58.403039 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:58.403808 | orchestrator | Monday 05 May 2025 00:40:58 +0000 (0:00:00.430) 0:00:06.415 ************
2025-05-05 00:40:58.600617 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:58.602196 | orchestrator |
2025-05-05 00:40:58.602229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:58.603432 | orchestrator | Monday 05 May 2025 00:40:58 +0000 (0:00:00.210) 0:00:06.625 ************
2025-05-05 00:40:58.823789 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:58.824004 | orchestrator |
2025-05-05 00:40:58.826990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:58.829585 | orchestrator | Monday 05 May 2025 00:40:58 +0000 (0:00:00.218) 0:00:06.844 ************
2025-05-05 00:40:59.086784 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:59.087769 | orchestrator |
2025-05-05 00:40:59.088045 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:59.088936 | orchestrator | Monday 05 May 2025 00:40:59 +0000 (0:00:00.266) 0:00:07.110 ************
2025-05-05 00:40:59.291875 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:59.292047 | orchestrator |
2025-05-05 00:40:59.292367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:59.293444 | orchestrator | Monday 05 May 2025 00:40:59 +0000 (0:00:00.200) 0:00:07.310 ************
2025-05-05 00:40:59.890897 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:40:59.892091 | orchestrator |
2025-05-05 00:40:59.893133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:40:59.898101 | orchestrator | Monday 05 May 2025 00:40:59 +0000 (0:00:00.604) 0:00:07.915 ************
2025-05-05 00:41:00.113217 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:00.113648 | orchestrator |
2025-05-05 00:41:00.115660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:00.342308 | orchestrator | Monday 05 May 2025 00:41:00 +0000 (0:00:00.222) 0:00:08.137 ************
2025-05-05 00:41:00.342403 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:00.344045 | orchestrator |
2025-05-05 00:41:00.344480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:00.346918 | orchestrator | Monday 05 May 2025 00:41:00 +0000 (0:00:00.227) 0:00:08.365 ************
2025-05-05 00:41:00.604983 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:00.605338 | orchestrator |
2025-05-05 00:41:00.607423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:00.611879 | orchestrator | Monday 05 May 2025 00:41:00 +0000 (0:00:00.265) 0:00:08.630 ************
2025-05-05 00:41:01.356189 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-05 00:41:01.356634 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-05 00:41:01.356676 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-05 00:41:01.357181 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-05 00:41:01.357637 | orchestrator |
2025-05-05 00:41:01.358180 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:01.358827 | orchestrator | Monday 05 May 2025 00:41:01 +0000 (0:00:00.746) 0:00:09.377 ************
2025-05-05 00:41:01.585267 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:01.588183 | orchestrator |
2025-05-05 00:41:01.590338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:01.590896 | orchestrator | Monday 05 May 2025 00:41:01 +0000 (0:00:00.233) 0:00:09.611 ************
2025-05-05 00:41:01.822540 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:01.825191 | orchestrator |
2025-05-05 00:41:01.828335 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:01.828899 | orchestrator | Monday 05 May 2025 00:41:01 +0000 (0:00:00.236) 0:00:09.847 ************
2025-05-05 00:41:02.101578 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:02.101890 | orchestrator |
2025-05-05 00:41:02.103096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:02.103379 | orchestrator | Monday 05 May 2025 00:41:02 +0000 (0:00:00.277) 0:00:10.125 ************
2025-05-05 00:41:02.268387 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:02.269146 | orchestrator |
2025-05-05 00:41:02.269376 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-05 00:41:02.269880 | orchestrator | Monday 05 May 2025 00:41:02 +0000 (0:00:00.169) 0:00:10.295 ************
2025-05-05 00:41:02.421017 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-05 00:41:02.421939 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-05 00:41:02.421999 | orchestrator |
2025-05-05 00:41:02.529920 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-05 00:41:02.530098 | orchestrator | Monday 05 May 2025 00:41:02 +0000 (0:00:00.150) 0:00:10.445 ************
2025-05-05 00:41:02.530130 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:02.530198 | orchestrator |
2025-05-05 00:41:02.530503 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-05 00:41:02.531268 | orchestrator | Monday 05 May 2025 00:41:02 +0000 (0:00:00.111) 0:00:10.557 ************
2025-05-05 00:41:02.790968 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:02.794850 | orchestrator |
2025-05-05 00:41:02.795012 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-05 00:41:02.795604 | orchestrator | Monday 05 May 2025 00:41:02 +0000 (0:00:00.256) 0:00:10.813 ************
2025-05-05 00:41:02.906000 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:02.906242 | orchestrator |
2025-05-05 00:41:02.906683 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-05 00:41:02.906748 | orchestrator | Monday 05 May 2025 00:41:02 +0000 (0:00:00.100) 0:00:10.913 ************
2025-05-05 00:41:03.024980 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:41:03.025139 | orchestrator |
2025-05-05 00:41:03.025373 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-05 00:41:03.025762 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.136) 0:00:11.050 ************
2025-05-05 00:41:03.262099 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b45d62aa-c8ca-51ec-bff2-6c96656db621'}})
2025-05-05 00:41:03.262511 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac6a629e-412f-52b8-abc2-7f30e47159be'}})
2025-05-05 00:41:03.262977 | orchestrator |
2025-05-05 00:41:03.263248 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-05 00:41:03.263800 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.220) 0:00:11.270 ************
2025-05-05 00:41:03.447595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b45d62aa-c8ca-51ec-bff2-6c96656db621'}})
2025-05-05 00:41:03.451012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac6a629e-412f-52b8-abc2-7f30e47159be'}})
2025-05-05 00:41:03.452210 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:03.453459 | orchestrator |
2025-05-05 00:41:03.459011 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-05 00:41:03.463796 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.185) 0:00:11.456 ************
2025-05-05 00:41:03.618007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b45d62aa-c8ca-51ec-bff2-6c96656db621'}})
2025-05-05 00:41:03.618744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac6a629e-412f-52b8-abc2-7f30e47159be'}})
2025-05-05 00:41:03.618889 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:03.618967 | orchestrator |
2025-05-05 00:41:03.619291 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-05 00:41:03.619640 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.187) 0:00:11.644 ************
2025-05-05 00:41:03.770649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b45d62aa-c8ca-51ec-bff2-6c96656db621'}})
2025-05-05 00:41:03.770865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac6a629e-412f-52b8-abc2-7f30e47159be'}})
2025-05-05 00:41:03.771760 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:03.771949 | orchestrator |
2025-05-05 00:41:03.772250 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-05 00:41:03.772524 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.152) 0:00:11.796 ************
2025-05-05 00:41:03.898125 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:41:03.898291 | orchestrator |
2025-05-05 00:41:03.898321 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-05 00:41:03.903579 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.124) 0:00:11.921 ************
2025-05-05 00:41:03.995215 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:41:03.995606 | orchestrator |
2025-05-05 00:41:03.995831 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-05 00:41:03.996822 | orchestrator | Monday 05 May 2025 00:41:03 +0000 (0:00:00.100) 0:00:12.022 ************
2025-05-05 00:41:04.128046 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:04.128852 | orchestrator |
2025-05-05 00:41:04.129088 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-05 00:41:04.129464 | orchestrator | Monday 05 May 2025 00:41:04 +0000 (0:00:00.127) 0:00:12.149 ************
2025-05-05 00:41:04.246769 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:04.249309 | orchestrator |
2025-05-05 00:41:04.249349 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-05 00:41:04.507061 | orchestrator | Monday 05 May 2025 00:41:04 +0000 (0:00:00.120) 0:00:12.270 ************
2025-05-05 00:41:04.507180 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:04.507660 | orchestrator |
2025-05-05 00:41:04.507817 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-05 00:41:04.507847 | orchestrator | Monday 05 May 2025 00:41:04 +0000 (0:00:00.261) 0:00:12.532 ************
2025-05-05 00:41:04.627373 | orchestrator | ok: [testbed-node-3] => {
2025-05-05 00:41:04.627512 | orchestrator |     "ceph_osd_devices": {
2025-05-05 00:41:04.628156 | orchestrator |         "sdb": {
2025-05-05 00:41:04.628835 | orchestrator |             "osd_lvm_uuid": "b45d62aa-c8ca-51ec-bff2-6c96656db621"
2025-05-05 00:41:04.630797 | orchestrator |         },
2025-05-05 00:41:04.631393 | orchestrator |         "sdc": {
2025-05-05 00:41:04.631425 | orchestrator |             "osd_lvm_uuid": "ac6a629e-412f-52b8-abc2-7f30e47159be"
2025-05-05 00:41:04.632610 | orchestrator |         }
2025-05-05 00:41:04.632645 | orchestrator |     }
2025-05-05 00:41:04.632666 | orchestrator | }
2025-05-05 00:41:04.632965 | orchestrator |
2025-05-05 00:41:04.632996 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-05 00:41:04.634494 | orchestrator | Monday 05 May 2025 00:41:04 +0000 (0:00:00.121) 0:00:12.653 ************
2025-05-05 00:41:04.762295 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:04.762480 | orchestrator |
2025-05-05 00:41:04.762512 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-05 00:41:04.762924 | orchestrator | Monday 05 May 2025 00:41:04 +0000 (0:00:00.132) 0:00:12.785 ************
2025-05-05 00:41:04.891548 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:41:04.892590 | orchestrator |
2025-05-05 00:41:04.892862 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-05 00:41:04.896478 | orchestrator | Monday 05 May 2025 00:41:04 +0000 (0:00:00.132) 0:00:12.918 ************
2025-05-05 00:41:05.004899 | orchestrator | skipping: [testbed-node-3]
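The `ceph_osd_devices` map printed above drives the `lvm_volumes` list that appears later in this log: each device's `osd_lvm_uuid` determines an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal Python sketch of that naming scheme for the block-only layout (the helper name `lvm_volumes_block_only` is hypothetical, for illustration; the actual work is done by the ceph-configure-lvm-volumes play):

```python
# Device map as printed by TASK [Print ceph_osd_devices] in this log.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "b45d62aa-c8ca-51ec-bff2-6c96656db621"},
    "sdc": {"osd_lvm_uuid": "ac6a629e-412f-52b8-abc2-7f30e47159be"},
}

def lvm_volumes_block_only(devices):
    # Block-only layout: one LV "osd-block-<uuid>" in one VG "ceph-<uuid>"
    # per OSD device; no separate DB or WAL volumes.
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]
```

Applied to the two devices above, this reproduces the `lvm_volumes` entries shown under TASK [Print configuration data] below.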
2025-05-05 00:41:05.005055 | orchestrator |
2025-05-05 00:41:05.005079 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-05 00:41:05.005104 | orchestrator | Monday 05 May 2025 00:41:05 +0000 (0:00:00.111) 0:00:13.029 ************
2025-05-05 00:41:05.252944 | orchestrator | changed: [testbed-node-3] => {
2025-05-05 00:41:05.253438 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-05 00:41:05.254548 | orchestrator |         "ceph_osd_devices": {
2025-05-05 00:41:05.256397 | orchestrator |             "sdb": {
2025-05-05 00:41:05.256971 | orchestrator |                 "osd_lvm_uuid": "b45d62aa-c8ca-51ec-bff2-6c96656db621"
2025-05-05 00:41:05.257738 | orchestrator |             },
2025-05-05 00:41:05.258562 | orchestrator |             "sdc": {
2025-05-05 00:41:05.259019 | orchestrator |                 "osd_lvm_uuid": "ac6a629e-412f-52b8-abc2-7f30e47159be"
2025-05-05 00:41:05.259805 | orchestrator |             }
2025-05-05 00:41:05.262515 | orchestrator |         },
2025-05-05 00:41:05.262979 | orchestrator |         "lvm_volumes": [
2025-05-05 00:41:05.263455 | orchestrator |             {
2025-05-05 00:41:05.263829 | orchestrator |                 "data": "osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621",
2025-05-05 00:41:05.264184 | orchestrator |                 "data_vg": "ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621"
2025-05-05 00:41:05.264529 | orchestrator |             },
2025-05-05 00:41:05.264985 | orchestrator |             {
2025-05-05 00:41:05.267242 | orchestrator |                 "data": "osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be",
2025-05-05 00:41:05.267980 | orchestrator |                 "data_vg": "ceph-ac6a629e-412f-52b8-abc2-7f30e47159be"
2025-05-05 00:41:05.269898 | orchestrator |             }
2025-05-05 00:41:05.271407 | orchestrator |         ]
2025-05-05 00:41:05.272398 | orchestrator |     }
2025-05-05 00:41:05.273451 | orchestrator | }
2025-05-05 00:41:05.274391 | orchestrator |
2025-05-05 00:41:05.275343 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-05 00:41:05.276286 | orchestrator | Monday 05 May 2025 00:41:05 +0000 (0:00:00.249) 0:00:13.278 ************
2025-05-05 00:41:06.999260 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-05 00:41:06.999412 | orchestrator |
2025-05-05 00:41:06.999441 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-05 00:41:07.002593 | orchestrator |
2025-05-05 00:41:07.002759 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-05 00:41:07.002839 | orchestrator | Monday 05 May 2025 00:41:06 +0000 (0:00:01.744) 0:00:15.023 ************
2025-05-05 00:41:07.242396 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-05 00:41:07.243418 | orchestrator |
2025-05-05 00:41:07.243466 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-05 00:41:07.243688 | orchestrator | Monday 05 May 2025 00:41:07 +0000 (0:00:00.245) 0:00:15.269 ************
2025-05-05 00:41:07.442759 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:41:07.753684 | orchestrator |
2025-05-05 00:41:07.753836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:07.753857 | orchestrator | Monday 05 May 2025 00:41:07 +0000 (0:00:00.197) 0:00:15.466 ************
2025-05-05 00:41:07.753887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-05 00:41:07.755122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-05 00:41:07.756414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-05 00:41:07.757283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-05 00:41:07.758201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-05 00:41:07.759416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-05 00:41:07.760482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-05 00:41:07.761994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-05 00:41:07.762960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-05 00:41:07.763357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-05 00:41:07.764010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-05 00:41:07.764944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-05 00:41:07.766436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-05 00:41:07.767142 | orchestrator |
2025-05-05 00:41:07.767176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:07.767713 | orchestrator | Monday 05 May 2025 00:41:07 +0000 (0:00:00.313) 0:00:15.780 ************
2025-05-05 00:41:07.968625 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:07.969346 | orchestrator |
2025-05-05 00:41:07.969399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:07.969874 | orchestrator | Monday 05 May 2025 00:41:07 +0000 (0:00:00.212) 0:00:15.992 ************
2025-05-05 00:41:08.162901 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:08.163889 | orchestrator |
2025-05-05 00:41:08.164282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:08.168441 | orchestrator | Monday 05 May 2025 00:41:08 +0000 (0:00:00.196) 0:00:16.188 ************
2025-05-05 00:41:08.373847 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:08.374269 | orchestrator |
2025-05-05 00:41:08.375807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:08.375935 | orchestrator | Monday 05 May 2025 00:41:08 +0000 (0:00:00.210) 0:00:16.399 ************
2025-05-05 00:41:08.565932 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:08.952350 | orchestrator |
2025-05-05 00:41:08.952464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:08.952484 | orchestrator | Monday 05 May 2025 00:41:08 +0000 (0:00:00.189) 0:00:16.588 ************
2025-05-05 00:41:08.952513 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:08.957073 | orchestrator |
2025-05-05 00:41:08.957168 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:08.957285 | orchestrator | Monday 05 May 2025 00:41:08 +0000 (0:00:00.390) 0:00:16.979 ************
2025-05-05 00:41:09.141952 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:09.143976 | orchestrator |
2025-05-05 00:41:09.144142 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:09.144172 | orchestrator | Monday 05 May 2025 00:41:09 +0000 (0:00:00.188) 0:00:17.167 ************
2025-05-05 00:41:09.329188 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:09.331352 | orchestrator |
2025-05-05 00:41:09.331507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:09.331591 | orchestrator | Monday 05 May 2025 00:41:09 +0000 (0:00:00.185) 0:00:17.352 ************
2025-05-05 00:41:09.525597 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:09.525801 | orchestrator |
2025-05-05 00:41:09.526813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:09.526894 | orchestrator | Monday 05 May 2025 00:41:09 +0000 (0:00:00.198) 0:00:17.551 ************
2025-05-05 00:41:09.913413 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b)
2025-05-05 00:41:09.913621 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b)
2025-05-05 00:41:09.913896 | orchestrator |
2025-05-05 00:41:09.913933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:09.914170 | orchestrator | Monday 05 May 2025 00:41:09 +0000 (0:00:00.388) 0:00:17.940 ************
2025-05-05 00:41:10.290478 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164)
2025-05-05 00:41:10.290814 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164)
2025-05-05 00:41:10.290868 | orchestrator |
2025-05-05 00:41:10.290893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:10.664774 | orchestrator | Monday 05 May 2025 00:41:10 +0000 (0:00:00.375) 0:00:18.315 ************
2025-05-05 00:41:10.664904 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170)
2025-05-05 00:41:10.664968 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170)
2025-05-05 00:41:10.664985 | orchestrator |
2025-05-05 00:41:10.665002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:10.665349 | orchestrator | Monday 05 May 2025 00:41:10 +0000 (0:00:00.374) 0:00:18.689 ************
2025-05-05 00:41:11.053565 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e)
2025-05-05 00:41:11.054867 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e)
2025-05-05 00:41:11.056690 | orchestrator |
2025-05-05 00:41:11.059786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:41:11.347725 | orchestrator | Monday 05 May 2025 00:41:11 +0000 (0:00:00.390) 0:00:19.080 ************
2025-05-05 00:41:11.347854 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-05 00:41:11.347924 | orchestrator |
2025-05-05 00:41:11.348444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:11.348668 | orchestrator | Monday 05 May 2025 00:41:11 +0000 (0:00:00.293) 0:00:19.373 ************
2025-05-05 00:41:11.803254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-05 00:41:11.803449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-05 00:41:11.805085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-05 00:41:11.805589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-05 00:41:11.808842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-05 00:41:11.811360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-05 00:41:11.811394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-05 00:41:11.811694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-05 00:41:11.812763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-05 00:41:11.813632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-05 00:41:11.814443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-05 00:41:11.820819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-05 00:41:12.038208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-05 00:41:12.038349 | orchestrator |
2025-05-05 00:41:12.038368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:12.038384 | orchestrator | Monday 05 May 2025 00:41:11 +0000 (0:00:00.455) 0:00:19.829 ************
2025-05-05 00:41:12.038416 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:12.041099 | orchestrator |
2025-05-05 00:41:12.260320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:12.260466 | orchestrator | Monday 05 May 2025 00:41:12 +0000 (0:00:00.232) 0:00:20.061 ************
2025-05-05 00:41:12.260545 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:12.260622 | orchestrator |
2025-05-05 00:41:12.260648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:12.261593 | orchestrator | Monday 05 May 2025 00:41:12 +0000 (0:00:00.221) 0:00:20.283 ************
2025-05-05 00:41:12.468263 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:12.468499 | orchestrator |
2025-05-05 00:41:12.469191 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:12.469815 | orchestrator | Monday 05 May 2025 00:41:12 +0000 (0:00:00.209) 0:00:20.493 ************
2025-05-05 00:41:12.704021 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:12.704401 | orchestrator |
2025-05-05 00:41:12.705973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:12.708612 | orchestrator | Monday 05 May 2025 00:41:12 +0000 (0:00:00.233) 0:00:20.726 ************
2025-05-05 00:41:12.916905 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:12.918411 | orchestrator |
2025-05-05 00:41:12.919421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:12.920203 | orchestrator | Monday 05 May 2025 00:41:12 +0000 (0:00:00.215) 0:00:20.942 ************
2025-05-05 00:41:13.133178 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:13.133786 | orchestrator |
2025-05-05 00:41:13.133982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:13.134977 | orchestrator | Monday 05 May 2025 00:41:13 +0000 (0:00:00.214) 0:00:21.157 ************
2025-05-05 00:41:13.345993 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:13.346244 | orchestrator |
2025-05-05 00:41:13.347592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:13.354481 | orchestrator | Monday 05 May 2025 00:41:13 +0000 (0:00:00.212) 0:00:21.369 ************
2025-05-05 00:41:13.601651 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:13.603027 | orchestrator |
2025-05-05 00:41:13.603200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:13.604052 | orchestrator | Monday 05 May 2025 00:41:13 +0000 (0:00:00.253) 0:00:21.622 ************
2025-05-05 00:41:14.530650 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-05 00:41:14.531236 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-05 00:41:14.532934 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-05 00:41:14.536800 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-05 00:41:14.538094 | orchestrator |
2025-05-05 00:41:14.539281 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:14.541494 | orchestrator | Monday 05 May 2025 00:41:14 +0000 (0:00:00.932) 0:00:22.554 ************
2025-05-05 00:41:15.308531 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:15.309685 | orchestrator |
2025-05-05 00:41:15.310678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:15.312076 | orchestrator | Monday 05 May 2025 00:41:15 +0000 (0:00:00.777) 0:00:23.332 ************
2025-05-05 00:41:15.526630 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:15.529653 | orchestrator |
2025-05-05 00:41:15.529759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:15.531447 | orchestrator | Monday 05 May 2025 00:41:15 +0000 (0:00:00.219) 0:00:23.552 ************
2025-05-05 00:41:15.768193 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:15.769796 | orchestrator |
2025-05-05 00:41:15.774597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:41:15.774865 | orchestrator | Monday 05 May 2025 00:41:15 +0000 (0:00:00.240) 0:00:23.792 ************
2025-05-05 00:41:16.026191 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:41:16.029448 | orchestrator |
2025-05-05 00:41:16.226154 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-05 00:41:16.226274 | orchestrator | Monday 05 May 2025 00:41:16 +0000 (0:00:00.258) 0:00:24.051 ************
2025-05-05 00:41:16.226310 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-05-05 00:41:16.227346 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-05-05 00:41:16.229164 | orchestrator |
2025-05-05 00:41:16.232229 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2025-05-05 00:41:16.375372 | orchestrator | Monday 05 May 2025 00:41:16 +0000 (0:00:00.199) 0:00:24.251 ************ 2025-05-05 00:41:16.375511 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:16.375852 | orchestrator | 2025-05-05 00:41:16.376888 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-05 00:41:16.377267 | orchestrator | Monday 05 May 2025 00:41:16 +0000 (0:00:00.149) 0:00:24.400 ************ 2025-05-05 00:41:16.519625 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:16.520289 | orchestrator | 2025-05-05 00:41:16.521174 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-05 00:41:16.522118 | orchestrator | Monday 05 May 2025 00:41:16 +0000 (0:00:00.144) 0:00:24.545 ************ 2025-05-05 00:41:16.672268 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:16.672451 | orchestrator | 2025-05-05 00:41:16.672480 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-05 00:41:16.672814 | orchestrator | Monday 05 May 2025 00:41:16 +0000 (0:00:00.150) 0:00:24.695 ************ 2025-05-05 00:41:16.862388 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:41:16.863302 | orchestrator | 2025-05-05 00:41:16.864183 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-05 00:41:16.865343 | orchestrator | Monday 05 May 2025 00:41:16 +0000 (0:00:00.190) 0:00:24.886 ************ 2025-05-05 00:41:17.039330 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'}}) 2025-05-05 00:41:17.039755 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbbf782-cf90-597f-b1d9-d891fd7b35f3'}}) 2025-05-05 00:41:17.040993 | orchestrator | 2025-05-05 00:41:17.041980 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-05-05 00:41:17.042650 | orchestrator | Monday 05 May 2025 00:41:17 +0000 (0:00:00.178) 0:00:25.064 ************ 2025-05-05 00:41:17.241333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'}})  2025-05-05 00:41:17.242915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbbf782-cf90-597f-b1d9-d891fd7b35f3'}})  2025-05-05 00:41:17.244660 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:17.248950 | orchestrator | 2025-05-05 00:41:17.249466 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-05 00:41:17.249500 | orchestrator | Monday 05 May 2025 00:41:17 +0000 (0:00:00.201) 0:00:25.265 ************ 2025-05-05 00:41:17.612295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'}})  2025-05-05 00:41:17.613285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbbf782-cf90-597f-b1d9-d891fd7b35f3'}})  2025-05-05 00:41:17.615136 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:17.615660 | orchestrator | 2025-05-05 00:41:17.616865 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-05 00:41:17.618006 | orchestrator | Monday 05 May 2025 00:41:17 +0000 (0:00:00.369) 0:00:25.635 ************ 2025-05-05 00:41:17.806381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'}})  2025-05-05 00:41:17.807647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbbf782-cf90-597f-b1d9-d891fd7b35f3'}})  2025-05-05 00:41:17.809908 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:17.811374 | 
orchestrator | 2025-05-05 00:41:17.812128 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-05 00:41:17.812632 | orchestrator | Monday 05 May 2025 00:41:17 +0000 (0:00:00.195) 0:00:25.831 ************ 2025-05-05 00:41:17.963611 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:41:17.966008 | orchestrator | 2025-05-05 00:41:17.969364 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-05 00:41:17.972824 | orchestrator | Monday 05 May 2025 00:41:17 +0000 (0:00:00.156) 0:00:25.988 ************ 2025-05-05 00:41:18.137332 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:41:18.138397 | orchestrator | 2025-05-05 00:41:18.138511 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-05 00:41:18.138541 | orchestrator | Monday 05 May 2025 00:41:18 +0000 (0:00:00.173) 0:00:26.161 ************ 2025-05-05 00:41:18.284889 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:18.285902 | orchestrator | 2025-05-05 00:41:18.287435 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-05 00:41:18.287908 | orchestrator | Monday 05 May 2025 00:41:18 +0000 (0:00:00.147) 0:00:26.309 ************ 2025-05-05 00:41:18.426763 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:18.427162 | orchestrator | 2025-05-05 00:41:18.427558 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-05 00:41:18.428022 | orchestrator | Monday 05 May 2025 00:41:18 +0000 (0:00:00.137) 0:00:26.447 ************ 2025-05-05 00:41:18.571786 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:18.571976 | orchestrator | 2025-05-05 00:41:18.575784 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-05 00:41:18.576214 | orchestrator | Monday 05 May 2025 00:41:18 +0000 
(0:00:00.148) 0:00:26.595 ************ 2025-05-05 00:41:18.727747 | orchestrator | ok: [testbed-node-4] => { 2025-05-05 00:41:18.728235 | orchestrator |  "ceph_osd_devices": { 2025-05-05 00:41:18.729575 | orchestrator |  "sdb": { 2025-05-05 00:41:18.734842 | orchestrator |  "osd_lvm_uuid": "09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f" 2025-05-05 00:41:18.735061 | orchestrator |  }, 2025-05-05 00:41:18.737790 | orchestrator |  "sdc": { 2025-05-05 00:41:18.738010 | orchestrator |  "osd_lvm_uuid": "1dbbf782-cf90-597f-b1d9-d891fd7b35f3" 2025-05-05 00:41:18.738543 | orchestrator |  } 2025-05-05 00:41:18.739258 | orchestrator |  } 2025-05-05 00:41:18.739797 | orchestrator | } 2025-05-05 00:41:18.740231 | orchestrator | 2025-05-05 00:41:18.740643 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-05 00:41:18.741044 | orchestrator | Monday 05 May 2025 00:41:18 +0000 (0:00:00.156) 0:00:26.752 ************ 2025-05-05 00:41:18.871976 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:18.872345 | orchestrator | 2025-05-05 00:41:18.873306 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-05 00:41:18.874233 | orchestrator | Monday 05 May 2025 00:41:18 +0000 (0:00:00.144) 0:00:26.897 ************ 2025-05-05 00:41:19.006201 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:19.006601 | orchestrator | 2025-05-05 00:41:19.007915 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-05 00:41:19.008655 | orchestrator | Monday 05 May 2025 00:41:19 +0000 (0:00:00.133) 0:00:27.031 ************ 2025-05-05 00:41:19.145900 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:41:19.146655 | orchestrator | 2025-05-05 00:41:19.147276 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-05 00:41:19.150123 | orchestrator | Monday 05 May 2025 00:41:19 +0000 
(0:00:00.139) 0:00:27.171 ************ 2025-05-05 00:41:19.604581 | orchestrator | changed: [testbed-node-4] => { 2025-05-05 00:41:19.606299 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-05 00:41:19.607295 | orchestrator |  "ceph_osd_devices": { 2025-05-05 00:41:19.608403 | orchestrator |  "sdb": { 2025-05-05 00:41:19.611445 | orchestrator |  "osd_lvm_uuid": "09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f" 2025-05-05 00:41:19.612238 | orchestrator |  }, 2025-05-05 00:41:19.613060 | orchestrator |  "sdc": { 2025-05-05 00:41:19.613303 | orchestrator |  "osd_lvm_uuid": "1dbbf782-cf90-597f-b1d9-d891fd7b35f3" 2025-05-05 00:41:19.613934 | orchestrator |  } 2025-05-05 00:41:19.614077 | orchestrator |  }, 2025-05-05 00:41:19.614646 | orchestrator |  "lvm_volumes": [ 2025-05-05 00:41:19.614861 | orchestrator |  { 2025-05-05 00:41:19.615265 | orchestrator |  "data": "osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f", 2025-05-05 00:41:19.615654 | orchestrator |  "data_vg": "ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f" 2025-05-05 00:41:19.616820 | orchestrator |  }, 2025-05-05 00:41:19.617571 | orchestrator |  { 2025-05-05 00:41:19.618332 | orchestrator |  "data": "osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3", 2025-05-05 00:41:19.618753 | orchestrator |  "data_vg": "ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3" 2025-05-05 00:41:19.619088 | orchestrator |  } 2025-05-05 00:41:19.619585 | orchestrator |  ] 2025-05-05 00:41:19.620061 | orchestrator |  } 2025-05-05 00:41:19.620275 | orchestrator | } 2025-05-05 00:41:19.620902 | orchestrator | 2025-05-05 00:41:19.621218 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-05 00:41:19.622106 | orchestrator | Monday 05 May 2025 00:41:19 +0000 (0:00:00.453) 0:00:27.624 ************ 2025-05-05 00:41:20.948926 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-05 00:41:20.949600 | orchestrator | 2025-05-05 00:41:20.949656 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2025-05-05 00:41:20.950330 | orchestrator | 2025-05-05 00:41:20.951275 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-05 00:41:20.954524 | orchestrator | Monday 05 May 2025 00:41:20 +0000 (0:00:01.348) 0:00:28.972 ************ 2025-05-05 00:41:21.200580 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-05 00:41:21.200841 | orchestrator | 2025-05-05 00:41:21.202729 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-05 00:41:21.203810 | orchestrator | Monday 05 May 2025 00:41:21 +0000 (0:00:00.253) 0:00:29.225 ************ 2025-05-05 00:41:21.447568 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:41:21.447894 | orchestrator | 2025-05-05 00:41:21.451858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:22.151486 | orchestrator | Monday 05 May 2025 00:41:21 +0000 (0:00:00.247) 0:00:29.472 ************ 2025-05-05 00:41:22.151629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-05 00:41:22.152105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-05 00:41:22.152764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-05 00:41:22.153657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-05 00:41:22.154962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-05 00:41:22.155842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-05 00:41:22.158640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-05 00:41:22.158757 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-05 00:41:22.158777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-05 00:41:22.158791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-05 00:41:22.158804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-05 00:41:22.158822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-05 00:41:22.159628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-05 00:41:22.160536 | orchestrator | 2025-05-05 00:41:22.161129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:22.162257 | orchestrator | Monday 05 May 2025 00:41:22 +0000 (0:00:00.703) 0:00:30.176 ************ 2025-05-05 00:41:22.358905 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:22.359557 | orchestrator | 2025-05-05 00:41:22.360327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:22.361115 | orchestrator | Monday 05 May 2025 00:41:22 +0000 (0:00:00.207) 0:00:30.383 ************ 2025-05-05 00:41:22.572354 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:22.577135 | orchestrator | 2025-05-05 00:41:22.768473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:22.768593 | orchestrator | Monday 05 May 2025 00:41:22 +0000 (0:00:00.212) 0:00:30.596 ************ 2025-05-05 00:41:22.768628 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:22.769909 | orchestrator | 2025-05-05 00:41:22.769949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:22.771437 | 
orchestrator | Monday 05 May 2025 00:41:22 +0000 (0:00:00.195) 0:00:30.791 ************ 2025-05-05 00:41:22.966809 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:22.968357 | orchestrator | 2025-05-05 00:41:22.968405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:22.969034 | orchestrator | Monday 05 May 2025 00:41:22 +0000 (0:00:00.200) 0:00:30.992 ************ 2025-05-05 00:41:23.178532 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:23.178869 | orchestrator | 2025-05-05 00:41:23.179651 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:23.180564 | orchestrator | Monday 05 May 2025 00:41:23 +0000 (0:00:00.212) 0:00:31.204 ************ 2025-05-05 00:41:23.380830 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:23.381253 | orchestrator | 2025-05-05 00:41:23.381963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:23.386173 | orchestrator | Monday 05 May 2025 00:41:23 +0000 (0:00:00.201) 0:00:31.405 ************ 2025-05-05 00:41:23.573413 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:23.574187 | orchestrator | 2025-05-05 00:41:23.574857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:23.575256 | orchestrator | Monday 05 May 2025 00:41:23 +0000 (0:00:00.192) 0:00:31.598 ************ 2025-05-05 00:41:23.769346 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:23.770559 | orchestrator | 2025-05-05 00:41:23.771763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:23.773172 | orchestrator | Monday 05 May 2025 00:41:23 +0000 (0:00:00.196) 0:00:31.794 ************ 2025-05-05 00:41:24.620276 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e) 2025-05-05 00:41:24.621328 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e) 2025-05-05 00:41:24.622465 | orchestrator | 2025-05-05 00:41:24.623318 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:24.624089 | orchestrator | Monday 05 May 2025 00:41:24 +0000 (0:00:00.848) 0:00:32.643 ************ 2025-05-05 00:41:25.042151 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370) 2025-05-05 00:41:25.042366 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370) 2025-05-05 00:41:25.043746 | orchestrator | 2025-05-05 00:41:25.044061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:25.045239 | orchestrator | Monday 05 May 2025 00:41:25 +0000 (0:00:00.422) 0:00:33.065 ************ 2025-05-05 00:41:25.484395 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10) 2025-05-05 00:41:25.485056 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10) 2025-05-05 00:41:25.485693 | orchestrator | 2025-05-05 00:41:25.487648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:41:25.488446 | orchestrator | Monday 05 May 2025 00:41:25 +0000 (0:00:00.443) 0:00:33.508 ************ 2025-05-05 00:41:25.927064 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d) 2025-05-05 00:41:25.927288 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d) 2025-05-05 00:41:25.929982 | orchestrator | 2025-05-05 00:41:26.289168 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-05 00:41:26.289364 | orchestrator | Monday 05 May 2025 00:41:25 +0000 (0:00:00.441) 0:00:33.950 ************ 2025-05-05 00:41:26.289398 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-05 00:41:26.289452 | orchestrator | 2025-05-05 00:41:26.290758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:26.291050 | orchestrator | Monday 05 May 2025 00:41:26 +0000 (0:00:00.363) 0:00:34.313 ************ 2025-05-05 00:41:26.695643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-05 00:41:26.696239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-05 00:41:26.698063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-05 00:41:26.698828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-05 00:41:26.700115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-05 00:41:26.701428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-05 00:41:26.701937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-05 00:41:26.703411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-05 00:41:26.704205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-05 00:41:26.704830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-05 00:41:26.705221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-05-05 00:41:26.705772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-05 00:41:26.706521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-05 00:41:26.706865 | orchestrator | 2025-05-05 00:41:26.707355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:26.707688 | orchestrator | Monday 05 May 2025 00:41:26 +0000 (0:00:00.406) 0:00:34.720 ************ 2025-05-05 00:41:26.924793 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:26.925683 | orchestrator | 2025-05-05 00:41:26.926520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:26.927328 | orchestrator | Monday 05 May 2025 00:41:26 +0000 (0:00:00.227) 0:00:34.948 ************ 2025-05-05 00:41:27.138508 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:27.139037 | orchestrator | 2025-05-05 00:41:27.140080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:27.140233 | orchestrator | Monday 05 May 2025 00:41:27 +0000 (0:00:00.215) 0:00:35.164 ************ 2025-05-05 00:41:27.357163 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:27.358178 | orchestrator | 2025-05-05 00:41:27.358988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:27.360137 | orchestrator | Monday 05 May 2025 00:41:27 +0000 (0:00:00.217) 0:00:35.381 ************ 2025-05-05 00:41:27.565347 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:27.565856 | orchestrator | 2025-05-05 00:41:27.566741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:27.568148 | orchestrator | Monday 05 May 2025 00:41:27 +0000 (0:00:00.209) 0:00:35.590 ************ 2025-05-05 00:41:28.159144 
| orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:28.159342 | orchestrator | 2025-05-05 00:41:28.159798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:28.163343 | orchestrator | Monday 05 May 2025 00:41:28 +0000 (0:00:00.592) 0:00:36.183 ************ 2025-05-05 00:41:28.348492 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:28.349628 | orchestrator | 2025-05-05 00:41:28.350313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:28.350931 | orchestrator | Monday 05 May 2025 00:41:28 +0000 (0:00:00.190) 0:00:36.373 ************ 2025-05-05 00:41:28.556338 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:28.557551 | orchestrator | 2025-05-05 00:41:28.558114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:28.559179 | orchestrator | Monday 05 May 2025 00:41:28 +0000 (0:00:00.207) 0:00:36.581 ************ 2025-05-05 00:41:28.763266 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:28.763873 | orchestrator | 2025-05-05 00:41:28.764870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:28.764920 | orchestrator | Monday 05 May 2025 00:41:28 +0000 (0:00:00.207) 0:00:36.789 ************ 2025-05-05 00:41:29.424801 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-05 00:41:29.427913 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-05 00:41:29.427988 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-05 00:41:29.428243 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-05 00:41:29.428272 | orchestrator | 2025-05-05 00:41:29.428288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:29.428313 | orchestrator | Monday 05 May 2025 00:41:29 +0000 (0:00:00.657) 0:00:37.446 
************ 2025-05-05 00:41:29.630687 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:29.630930 | orchestrator | 2025-05-05 00:41:29.631455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:29.631895 | orchestrator | Monday 05 May 2025 00:41:29 +0000 (0:00:00.200) 0:00:37.647 ************ 2025-05-05 00:41:29.861398 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:29.862865 | orchestrator | 2025-05-05 00:41:29.864933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:29.865749 | orchestrator | Monday 05 May 2025 00:41:29 +0000 (0:00:00.237) 0:00:37.884 ************ 2025-05-05 00:41:30.065975 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:30.066237 | orchestrator | 2025-05-05 00:41:30.066269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:41:30.069065 | orchestrator | Monday 05 May 2025 00:41:30 +0000 (0:00:00.204) 0:00:38.089 ************ 2025-05-05 00:41:30.281878 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:41:30.282209 | orchestrator | 2025-05-05 00:41:30.282250 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-05 00:41:30.282274 | orchestrator | Monday 05 May 2025 00:41:30 +0000 (0:00:00.216) 0:00:38.306 ************ 2025-05-05 00:41:30.485467 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-05 00:41:30.485633 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-05 00:41:30.485656 | orchestrator | 2025-05-05 00:41:30.485677 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-05 00:41:30.486165 | orchestrator | Monday 05 May 2025 00:41:30 +0000 (0:00:00.203) 0:00:38.509 ************ 2025-05-05 00:41:30.872438 | orchestrator | skipping: 
[testbed-node-5]
2025-05-05 00:41:30.873491 | orchestrator |
2025-05-05 00:41:30.875118 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-05 00:41:30.876041 | orchestrator | Monday 05 May 2025 00:41:30 +0000 (0:00:00.386) 0:00:38.896 ************
2025-05-05 00:41:31.017659 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:31.018191 | orchestrator |
2025-05-05 00:41:31.019660 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-05 00:41:31.020910 | orchestrator | Monday 05 May 2025 00:41:31 +0000 (0:00:00.145) 0:00:39.042 ************
2025-05-05 00:41:31.160279 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:31.160636 | orchestrator |
2025-05-05 00:41:31.161962 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-05 00:41:31.162434 | orchestrator | Monday 05 May 2025 00:41:31 +0000 (0:00:00.143) 0:00:39.185 ************
2025-05-05 00:41:31.302575 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:41:31.303940 | orchestrator |
2025-05-05 00:41:31.304053 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-05 00:41:31.305034 | orchestrator | Monday 05 May 2025 00:41:31 +0000 (0:00:00.141) 0:00:39.327 ************
2025-05-05 00:41:31.497128 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ded391-41bb-58c4-acef-51f998367f5e'}})
2025-05-05 00:41:31.497960 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}})
2025-05-05 00:41:31.499530 | orchestrator |
2025-05-05 00:41:31.500231 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-05 00:41:31.502890 | orchestrator | Monday 05 May 2025 00:41:31 +0000 (0:00:00.194) 0:00:39.521 ************
2025-05-05 00:41:31.679367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ded391-41bb-58c4-acef-51f998367f5e'}})
2025-05-05 00:41:31.679642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}})
2025-05-05 00:41:31.683372 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:31.683572 | orchestrator |
2025-05-05 00:41:31.683605 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-05 00:41:31.684353 | orchestrator | Monday 05 May 2025 00:41:31 +0000 (0:00:00.182) 0:00:39.703 ************
2025-05-05 00:41:31.849568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ded391-41bb-58c4-acef-51f998367f5e'}})
2025-05-05 00:41:31.850241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}})
2025-05-05 00:41:31.851203 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:31.853492 | orchestrator |
2025-05-05 00:41:31.854347 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-05 00:41:31.854395 | orchestrator | Monday 05 May 2025 00:41:31 +0000 (0:00:00.170) 0:00:39.874 ************
2025-05-05 00:41:32.011257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ded391-41bb-58c4-acef-51f998367f5e'}})
2025-05-05 00:41:32.012485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}})
2025-05-05 00:41:32.013397 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:32.014349 | orchestrator |
2025-05-05 00:41:32.016388 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-05 00:41:32.176561 | orchestrator | Monday 05 May 2025 00:41:32 +0000 (0:00:00.161) 0:00:40.036 ************
2025-05-05 00:41:32.176756 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:41:32.178322 | orchestrator |
2025-05-05 00:41:32.178435 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-05 00:41:32.178460 | orchestrator | Monday 05 May 2025 00:41:32 +0000 (0:00:00.163) 0:00:40.200 ************
2025-05-05 00:41:32.329120 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:41:32.329763 | orchestrator |
2025-05-05 00:41:32.330330 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-05 00:41:32.331033 | orchestrator | Monday 05 May 2025 00:41:32 +0000 (0:00:00.153) 0:00:40.354 ************
2025-05-05 00:41:32.470889 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:32.471131 | orchestrator |
2025-05-05 00:41:32.472322 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-05 00:41:32.472448 | orchestrator | Monday 05 May 2025 00:41:32 +0000 (0:00:00.140) 0:00:40.494 ************
2025-05-05 00:41:32.626862 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:32.627623 | orchestrator |
2025-05-05 00:41:32.630442 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-05 00:41:32.631121 | orchestrator | Monday 05 May 2025 00:41:32 +0000 (0:00:00.156) 0:00:40.651 ************
2025-05-05 00:41:33.031042 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:33.031801 | orchestrator |
2025-05-05 00:41:33.032548 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-05 00:41:33.033415 | orchestrator | Monday 05 May 2025 00:41:33 +0000 (0:00:00.403) 0:00:41.054 ************
2025-05-05 00:41:33.183642 | orchestrator | ok: [testbed-node-5] => {
2025-05-05 00:41:33.183934 | orchestrator |     "ceph_osd_devices": {
2025-05-05 00:41:33.184847 | orchestrator |         "sdb": {
2025-05-05 00:41:33.186232 | orchestrator |             "osd_lvm_uuid": "19ded391-41bb-58c4-acef-51f998367f5e"
2025-05-05 00:41:33.188975 | orchestrator |         },
2025-05-05 00:41:33.189783 | orchestrator |         "sdc": {
2025-05-05 00:41:33.189821 | orchestrator |             "osd_lvm_uuid": "5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e"
2025-05-05 00:41:33.189843 | orchestrator |         }
2025-05-05 00:41:33.190434 | orchestrator |     }
2025-05-05 00:41:33.190876 | orchestrator | }
2025-05-05 00:41:33.191401 | orchestrator |
2025-05-05 00:41:33.192030 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-05 00:41:33.192673 | orchestrator | Monday 05 May 2025 00:41:33 +0000 (0:00:00.152) 0:00:41.207 ************
2025-05-05 00:41:33.335444 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:33.336281 | orchestrator |
2025-05-05 00:41:33.336423 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-05 00:41:33.337407 | orchestrator | Monday 05 May 2025 00:41:33 +0000 (0:00:00.152) 0:00:41.360 ************
2025-05-05 00:41:33.482521 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:33.482969 | orchestrator |
2025-05-05 00:41:33.483861 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-05 00:41:33.485080 | orchestrator | Monday 05 May 2025 00:41:33 +0000 (0:00:00.147) 0:00:41.508 ************
2025-05-05 00:41:33.637317 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:41:33.637978 | orchestrator |
2025-05-05 00:41:33.638949 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-05 00:41:33.639439 | orchestrator | Monday 05 May 2025 00:41:33 +0000 (0:00:00.147) 0:00:41.655 ************
2025-05-05 00:41:33.924130 | orchestrator | changed: [testbed-node-5] => {
2025-05-05 00:41:33.924634 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-05 00:41:33.925013 | orchestrator |         "ceph_osd_devices": {
2025-05-05 00:41:33.925904 | orchestrator |             "sdb": {
2025-05-05 00:41:33.926689 | orchestrator |                 "osd_lvm_uuid": "19ded391-41bb-58c4-acef-51f998367f5e"
2025-05-05 00:41:33.927351 | orchestrator |             },
2025-05-05 00:41:33.930369 | orchestrator |             "sdc": {
2025-05-05 00:41:33.933019 | orchestrator |                 "osd_lvm_uuid": "5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e"
2025-05-05 00:41:33.933604 | orchestrator |             }
2025-05-05 00:41:33.934540 | orchestrator |         },
2025-05-05 00:41:33.935605 | orchestrator |         "lvm_volumes": [
2025-05-05 00:41:33.935870 | orchestrator |             {
2025-05-05 00:41:33.936638 | orchestrator |                 "data": "osd-block-19ded391-41bb-58c4-acef-51f998367f5e",
2025-05-05 00:41:33.937269 | orchestrator |                 "data_vg": "ceph-19ded391-41bb-58c4-acef-51f998367f5e"
2025-05-05 00:41:33.937975 | orchestrator |             },
2025-05-05 00:41:33.939365 | orchestrator |             {
2025-05-05 00:41:33.939861 | orchestrator |                 "data": "osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e",
2025-05-05 00:41:33.940561 | orchestrator |                 "data_vg": "ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e"
2025-05-05 00:41:33.941232 | orchestrator |             }
2025-05-05 00:41:33.942124 | orchestrator |         ]
2025-05-05 00:41:33.942248 | orchestrator |     }
2025-05-05 00:41:33.942800 | orchestrator | }
2025-05-05 00:41:33.943369 | orchestrator |
2025-05-05 00:41:33.944003 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-05 00:41:33.944424 | orchestrator | Monday 05 May 2025 00:41:33 +0000 (0:00:00.293) 0:00:41.948 ************
2025-05-05 00:41:35.047212 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-05 00:41:35.048595 | orchestrator |
2025-05-05 00:41:35.050403 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:41:35.050475 | orchestrator | 2025-05-05 00:41:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-05 00:41:35.051337 | orchestrator | 2025-05-05 00:41:35 | INFO  | Please wait and do not abort execution.
2025-05-05 00:41:35.051377 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-05 00:41:35.052233 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-05 00:41:35.053092 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-05 00:41:35.054105 | orchestrator |
2025-05-05 00:41:35.054506 | orchestrator |
2025-05-05 00:41:35.055952 | orchestrator |
2025-05-05 00:41:35.056316 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:41:35.057030 | orchestrator | Monday 05 May 2025 00:41:35 +0000 (0:00:01.122) 0:00:43.070 ************
2025-05-05 00:41:35.057068 | orchestrator | ===============================================================================
2025-05-05 00:41:35.057756 | orchestrator | Write configuration file ------------------------------------------------ 4.22s
2025-05-05 00:41:35.058441 | orchestrator | Add known links to the list of available block devices ------------------ 1.59s
2025-05-05 00:41:35.058887 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s
2025-05-05 00:41:35.059529 | orchestrator | Print configuration data ------------------------------------------------ 1.00s
2025-05-05 00:41:35.060071 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s
2025-05-05 00:41:35.060843 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s
2025-05-05 00:41:35.061417 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2025-05-05 00:41:35.062222 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.81s
2025-05-05 00:41:35.062564 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2025-05-05 00:41:35.063319 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2025-05-05 00:41:35.063727 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s
2025-05-05 00:41:35.064103 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.73s
2025-05-05 00:41:35.064599 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-05 00:41:35.065025 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-05-05 00:41:35.065748 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.65s
2025-05-05 00:41:35.066324 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-05 00:41:35.066863 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2025-05-05 00:41:35.067253 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.59s
2025-05-05 00:41:35.067885 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2025-05-05 00:41:35.068300 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.57s
2025-05-05 00:41:47.230441 | orchestrator | 2025-05-05 00:41:47 | INFO  | Task 2224b97c-3d1c-4647-8325-3573fb281b04 is running in background. Output coming soon.
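The play above derives one `lvm_volumes` entry per OSD disk from its `osd_lvm_uuid`, as shown in the printed `_ceph_configure_lvm_config_data`. A minimal Python sketch of that block-only mapping, using the UUIDs from the log; the playbook itself does this with Jinja2 templating, so this is illustrative only, not the playbook's code:

```python
# Input as printed by the "Print ceph_osd_devices" task above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "19ded391-41bb-58c4-acef-51f998367f5e"},
    "sdc": {"osd_lvm_uuid": "5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e"},
}

# Each device yields one lvm_volumes entry: the LV name is prefixed with
# "osd-block-" and the VG name with "ceph-", both built from the same UUID.
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

print(lvm_volumes)
```

The result matches the `lvm_volumes` list written out by the "Write configuration file" handler.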
2025-05-05 00:42:10.981875 | orchestrator | 2025-05-05 00:42:03 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-05 00:42:12.613601 | orchestrator | 2025-05-05 00:42:03 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-05 00:42:12.613781 | orchestrator | 2025-05-05 00:42:03 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-05 00:42:12.613805 | orchestrator | 2025-05-05 00:42:03 | INFO  | Handling group overwrites in 99-overwrite
2025-05-05 00:42:12.613835 | orchestrator | 2025-05-05 00:42:03 | INFO  | Removing group frr:children from 60-generic
2025-05-05 00:42:12.613851 | orchestrator | 2025-05-05 00:42:03 | INFO  | Removing group storage:children from 50-kolla
2025-05-05 00:42:12.613879 | orchestrator | 2025-05-05 00:42:03 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-05 00:42:12.613895 | orchestrator | 2025-05-05 00:42:03 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-05 00:42:12.613910 | orchestrator | 2025-05-05 00:42:03 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-05 00:42:12.613924 | orchestrator | 2025-05-05 00:42:03 | INFO  | Handling group overwrites in 20-roles
2025-05-05 00:42:12.613939 | orchestrator | 2025-05-05 00:42:03 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-05 00:42:12.613954 | orchestrator | 2025-05-05 00:42:04 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-05 00:42:12.613968 | orchestrator | 2025-05-05 00:42:10 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-05 00:42:12.614000 | orchestrator | 2025-05-05 00:42:12 | INFO  | Task ea8cbaf3-3444-47da-bf9c-5c943a4e95a1 (ceph-create-lvm-devices) was prepared for execution.
2025-05-05 00:42:15.503679 | orchestrator | 2025-05-05 00:42:12 | INFO  | It takes a moment until task ea8cbaf3-3444-47da-bf9c-5c943a4e95a1 (ceph-create-lvm-devices) has been started and output is visible here.
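The `ceph-create-lvm-devices` play that follows creates one volume group and one logical volume per `lvm_volumes` entry (the "Create block VGs" and "Create block LVs" tasks). A hedged sketch of the equivalent LVM command lines, using the UUIDs from the testbed-node-3 log; the play itself uses Ansible LVM modules rather than shell commands, and the `/dev/sdX` physical-volume paths here are placeholders, not taken from the log:

```python
# lvm_volumes entries as logged for testbed-node-3.
lvm_volumes = [
    {"data": "osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621",
     "data_vg": "ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621"},
    {"data": "osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be",
     "data_vg": "ceph-ac6a629e-412f-52b8-abc2-7f30e47159be"},
]
pvs = ["/dev/sdb", "/dev/sdc"]  # placeholder backing devices (assumption)

commands = []
for vol, pv in zip(lvm_volumes, pvs):
    # One VG per OSD device; vgcreate initializes the PV if needed.
    commands.append(f"vgcreate {vol['data_vg']} {pv}")
    # One LV filling the whole VG, to be consumed as the OSD block device.
    commands.append(f"lvcreate -n {vol['data']} -l 100%FREE {vol['data_vg']}")

print("\n".join(commands))
```

This mirrors the VG-per-device, LV-per-VG layout visible in the `changed:` items of the two tasks below.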
2025-05-05 00:42:15.503985 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-05 00:42:15.997046 | orchestrator |
2025-05-05 00:42:15.999016 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-05 00:42:16.239163 | orchestrator |
2025-05-05 00:42:16.239257 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-05 00:42:16.239268 | orchestrator | Monday 05 May 2025 00:42:15 +0000 (0:00:00.430) 0:00:00.430 ************
2025-05-05 00:42:16.239288 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-05 00:42:16.239436 | orchestrator |
2025-05-05 00:42:16.242681 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-05 00:42:16.481805 | orchestrator | Monday 05 May 2025 00:42:16 +0000 (0:00:00.233) 0:00:00.664 ************
2025-05-05 00:42:16.481931 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:42:16.484904 | orchestrator |
2025-05-05 00:42:16.485745 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:16.488488 | orchestrator | Monday 05 May 2025 00:42:16 +0000 (0:00:00.250) 0:00:00.914 ************
2025-05-05 00:42:17.246877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-05 00:42:17.250219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-05 00:42:17.250294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-05 00:42:17.251107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-05 00:42:17.254520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-05 00:42:17.254618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-05 00:42:17.255554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-05 00:42:17.256636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-05 00:42:17.258464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-05 00:42:17.259064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-05 00:42:17.260019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-05 00:42:17.260534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-05 00:42:17.261168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-05 00:42:17.261815 | orchestrator |
2025-05-05 00:42:17.262448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:17.262885 | orchestrator | Monday 05 May 2025 00:42:17 +0000 (0:00:00.765) 0:00:01.679 ************
2025-05-05 00:42:17.451897 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:17.652528 | orchestrator |
2025-05-05 00:42:17.652648 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:17.652667 | orchestrator | Monday 05 May 2025 00:42:17 +0000 (0:00:00.205) 0:00:01.885 ************
2025-05-05 00:42:17.652741 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:17.653525 | orchestrator |
2025-05-05 00:42:17.654380 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:17.654805 | orchestrator | Monday 05 May 2025 00:42:17 +0000 (0:00:00.202) 0:00:02.087 ************
2025-05-05 00:42:17.845468 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:17.845881 | orchestrator |
2025-05-05 00:42:17.846201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:17.849504 | orchestrator | Monday 05 May 2025 00:42:17 +0000 (0:00:00.191) 0:00:02.278 ************
2025-05-05 00:42:18.046509 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:18.046765 | orchestrator |
2025-05-05 00:42:18.047394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:18.047764 | orchestrator | Monday 05 May 2025 00:42:18 +0000 (0:00:00.202) 0:00:02.480 ************
2025-05-05 00:42:18.268038 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:18.271049 | orchestrator |
2025-05-05 00:42:18.271174 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:18.487032 | orchestrator | Monday 05 May 2025 00:42:18 +0000 (0:00:00.211) 0:00:02.692 ************
2025-05-05 00:42:18.487166 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:18.488019 | orchestrator |
2025-05-05 00:42:18.488393 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:18.491811 | orchestrator | Monday 05 May 2025 00:42:18 +0000 (0:00:00.227) 0:00:02.919 ************
2025-05-05 00:42:18.692092 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:18.692804 | orchestrator |
2025-05-05 00:42:18.693321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:18.694190 | orchestrator | Monday 05 May 2025 00:42:18 +0000 (0:00:00.206) 0:00:03.126 ************
2025-05-05 00:42:18.895204 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:18.895583 | orchestrator |
2025-05-05 00:42:18.896295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:18.896824 | orchestrator | Monday 05 May 2025 00:42:18 +0000 (0:00:00.203) 0:00:03.329 ************
2025-05-05 00:42:19.503791 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4)
2025-05-05 00:42:19.503969 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4)
2025-05-05 00:42:19.503993 | orchestrator |
2025-05-05 00:42:19.504015 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:19.506889 | orchestrator | Monday 05 May 2025 00:42:19 +0000 (0:00:00.605) 0:00:03.934 ************
2025-05-05 00:42:20.262293 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7)
2025-05-05 00:42:20.264258 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7)
2025-05-05 00:42:20.264877 | orchestrator |
2025-05-05 00:42:20.265752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:20.270310 | orchestrator | Monday 05 May 2025 00:42:20 +0000 (0:00:00.761) 0:00:04.696 ************
2025-05-05 00:42:20.676799 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6)
2025-05-05 00:42:20.677525 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6)
2025-05-05 00:42:20.679251 | orchestrator |
2025-05-05 00:42:20.683085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:21.096852 | orchestrator | Monday 05 May 2025 00:42:20 +0000 (0:00:00.413) 0:00:05.110 ************
2025-05-05 00:42:21.096986 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8)
2025-05-05 00:42:21.097732 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8)
2025-05-05 00:42:21.098546 | orchestrator |
2025-05-05 00:42:21.099808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:21.103680 | orchestrator | Monday 05 May 2025 00:42:21 +0000 (0:00:00.420) 0:00:05.531 ************
2025-05-05 00:42:21.428380 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-05 00:42:21.432904 | orchestrator |
2025-05-05 00:42:21.432991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:21.434172 | orchestrator | Monday 05 May 2025 00:42:21 +0000 (0:00:00.329) 0:00:05.860 ************
2025-05-05 00:42:21.916102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-05 00:42:21.916498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-05 00:42:21.917244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-05 00:42:21.917910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-05 00:42:21.919204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-05 00:42:21.920187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-05 00:42:21.920220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-05 00:42:21.922229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-05 00:42:21.922330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-05 00:42:21.923932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-05 00:42:21.924206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-05 00:42:21.924236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-05 00:42:21.924251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-05 00:42:21.924265 | orchestrator |
2025-05-05 00:42:21.924285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:21.924609 | orchestrator | Monday 05 May 2025 00:42:21 +0000 (0:00:00.489) 0:00:06.350 ************
2025-05-05 00:42:22.119971 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:22.120591 | orchestrator |
2025-05-05 00:42:22.121283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:22.122101 | orchestrator | Monday 05 May 2025 00:42:22 +0000 (0:00:00.204) 0:00:06.554 ************
2025-05-05 00:42:22.325036 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:22.325394 | orchestrator |
2025-05-05 00:42:22.326313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:22.524626 | orchestrator | Monday 05 May 2025 00:42:22 +0000 (0:00:00.204) 0:00:06.758 ************
2025-05-05 00:42:22.524818 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:22.732453 | orchestrator |
2025-05-05 00:42:22.732579 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:22.732607 | orchestrator | Monday 05 May 2025 00:42:22 +0000 (0:00:00.197) 0:00:06.956 ************
2025-05-05 00:42:22.732687 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:22.732917 | orchestrator |
2025-05-05 00:42:22.733986 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:22.735498 | orchestrator | Monday 05 May 2025 00:42:22 +0000 (0:00:00.209) 0:00:07.166 ************
2025-05-05 00:42:23.285371 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:23.286366 | orchestrator |
2025-05-05 00:42:23.287415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:23.289495 | orchestrator | Monday 05 May 2025 00:42:23 +0000 (0:00:00.552) 0:00:07.719 ************
2025-05-05 00:42:23.497589 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:23.498627 | orchestrator |
2025-05-05 00:42:23.500879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:23.501131 | orchestrator | Monday 05 May 2025 00:42:23 +0000 (0:00:00.211) 0:00:07.930 ************
2025-05-05 00:42:23.694251 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:23.695007 | orchestrator |
2025-05-05 00:42:23.695070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:23.695915 | orchestrator | Monday 05 May 2025 00:42:23 +0000 (0:00:00.197) 0:00:08.127 ************
2025-05-05 00:42:23.903108 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:23.903357 | orchestrator |
2025-05-05 00:42:23.903415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:23.903435 | orchestrator | Monday 05 May 2025 00:42:23 +0000 (0:00:00.209) 0:00:08.337 ************
2025-05-05 00:42:24.546865 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-05 00:42:24.547105 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-05 00:42:24.550114 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-05 00:42:24.550425 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-05 00:42:24.550461 | orchestrator |
2025-05-05 00:42:24.550491 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:24.550889 | orchestrator | Monday 05 May 2025 00:42:24 +0000 (0:00:00.641) 0:00:08.978 ************
2025-05-05 00:42:24.748466 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:24.749155 | orchestrator |
2025-05-05 00:42:24.749345 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:24.750113 | orchestrator | Monday 05 May 2025 00:42:24 +0000 (0:00:00.201) 0:00:09.180 ************
2025-05-05 00:42:24.952331 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:24.952669 | orchestrator |
2025-05-05 00:42:24.953606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:24.956254 | orchestrator | Monday 05 May 2025 00:42:24 +0000 (0:00:00.204) 0:00:09.384 ************
2025-05-05 00:42:25.150624 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:25.150827 | orchestrator |
2025-05-05 00:42:25.151880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:25.152505 | orchestrator | Monday 05 May 2025 00:42:25 +0000 (0:00:00.200) 0:00:09.584 ************
2025-05-05 00:42:25.346112 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:25.347443 | orchestrator |
2025-05-05 00:42:25.349266 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-05 00:42:25.349412 | orchestrator | Monday 05 May 2025 00:42:25 +0000 (0:00:00.195) 0:00:09.780 ************
2025-05-05 00:42:25.473413 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:25.473609 | orchestrator |
2025-05-05 00:42:25.474097 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-05 00:42:25.474658 | orchestrator | Monday 05 May 2025 00:42:25 +0000 (0:00:00.127) 0:00:09.907 ************
2025-05-05 00:42:25.668433 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b45d62aa-c8ca-51ec-bff2-6c96656db621'}})
2025-05-05 00:42:25.669243 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac6a629e-412f-52b8-abc2-7f30e47159be'}})
2025-05-05 00:42:25.670770 | orchestrator |
2025-05-05 00:42:25.671417 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-05 00:42:25.672276 | orchestrator | Monday 05 May 2025 00:42:25 +0000 (0:00:00.194) 0:00:10.102 ************
2025-05-05 00:42:27.831951 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:27.832524 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:27.832772 | orchestrator |
2025-05-05 00:42:27.833525 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-05 00:42:27.835978 | orchestrator | Monday 05 May 2025 00:42:27 +0000 (0:00:02.162) 0:00:12.264 ************
2025-05-05 00:42:28.009970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:28.010197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:28.010765 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:28.011360 | orchestrator |
2025-05-05 00:42:28.011846 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-05 00:42:28.012460 | orchestrator | Monday 05 May 2025 00:42:28 +0000 (0:00:00.177) 0:00:12.442 ************
2025-05-05 00:42:29.469610 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:29.469939 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:29.472122 | orchestrator |
2025-05-05 00:42:29.472500 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-05 00:42:29.473884 | orchestrator | Monday 05 May 2025 00:42:29 +0000 (0:00:01.459) 0:00:13.901 ************
2025-05-05 00:42:29.633090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:29.633568 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:29.634549 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:29.635153 | orchestrator |
2025-05-05 00:42:29.636300 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-05 00:42:29.636631 | orchestrator | Monday 05 May 2025 00:42:29 +0000 (0:00:00.165) 0:00:14.067 ************
2025-05-05 00:42:29.775385 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:29.776933 | orchestrator |
2025-05-05 00:42:29.777670 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-05 00:42:29.780624 | orchestrator | Monday 05 May 2025 00:42:29 +0000 (0:00:00.141) 0:00:14.209 ************
2025-05-05 00:42:29.947604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:29.947863 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:29.948744 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:29.949398 | orchestrator |
2025-05-05 00:42:29.950153 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-05 00:42:29.950914 | orchestrator | Monday 05 May 2025 00:42:29 +0000 (0:00:00.166) 0:00:14.375 ************
2025-05-05 00:42:30.096412 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:30.097170 | orchestrator |
2025-05-05 00:42:30.097515 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-05 00:42:30.098321 | orchestrator | Monday 05 May 2025 00:42:30 +0000 (0:00:00.153) 0:00:14.529 ************
2025-05-05 00:42:30.271897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:30.272094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:30.272120 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:30.272143 | orchestrator |
2025-05-05 00:42:30.272396 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-05 00:42:30.272599 | orchestrator | Monday 05 May 2025 00:42:30 +0000 (0:00:00.176) 0:00:14.706 ************
2025-05-05 00:42:30.588198 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:30.590584 | orchestrator |
2025-05-05 00:42:30.592261 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-05 00:42:30.775832 | orchestrator | Monday 05 May 2025 00:42:30 +0000 (0:00:00.315) 0:00:15.021 ************
2025-05-05 00:42:30.775957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:30.776992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:30.777887 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:30.778834 | orchestrator |
2025-05-05 00:42:30.779436 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-05 00:42:30.780814 | orchestrator | Monday 05 May 2025 00:42:30 +0000 (0:00:00.187) 0:00:15.209 ************
2025-05-05 00:42:30.923551 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:42:30.924583 | orchestrator |
2025-05-05 00:42:30.925549 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-05 00:42:30.927211 | orchestrator | Monday 05 May 2025 00:42:30 +0000 (0:00:00.146) 0:00:15.356 ************
2025-05-05 00:42:31.105340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:31.105903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:31.108044 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:31.109395 | orchestrator |
2025-05-05 00:42:31.109520 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-05 00:42:31.109547 | orchestrator | Monday 05 May 2025 00:42:31 +0000 (0:00:00.182) 0:00:15.538 ************
2025-05-05 00:42:31.266103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:31.266889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:31.268291 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:31.269263 | orchestrator |
2025-05-05 00:42:31.269825 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-05 00:42:31.270635 | orchestrator | Monday 05 May 2025 00:42:31 +0000 (0:00:00.161) 0:00:15.699 ************
2025-05-05 00:42:31.449043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:42:31.449463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:42:31.450358 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:31.451261 | orchestrator |
2025-05-05 00:42:31.451851 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-05 00:42:31.454582 | orchestrator | Monday 05 May 2025 00:42:31 +0000 (0:00:00.183) 0:00:15.883 ************
2025-05-05 00:42:31.580892 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:31.581184 | orchestrator |
2025-05-05 00:42:31.582510 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-05 00:42:31.583465 | orchestrator | Monday 05 May 2025 00:42:31 +0000 (0:00:00.129) 0:00:16.012 ************
2025-05-05 00:42:31.727498 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:42:31.727953 | orchestrator |
2025-05-05 00:42:31.729348 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a
DB+WAL VG] ***************** 2025-05-05 00:42:31.729976 | orchestrator | Monday 05 May 2025 00:42:31 +0000 (0:00:00.147) 0:00:16.160 ************ 2025-05-05 00:42:31.873573 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:31.874494 | orchestrator | 2025-05-05 00:42:31.874651 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-05 00:42:31.875173 | orchestrator | Monday 05 May 2025 00:42:31 +0000 (0:00:00.146) 0:00:16.307 ************ 2025-05-05 00:42:32.026636 | orchestrator | ok: [testbed-node-3] => { 2025-05-05 00:42:32.027420 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-05 00:42:32.030441 | orchestrator | } 2025-05-05 00:42:32.164605 | orchestrator | 2025-05-05 00:42:32.164812 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-05 00:42:32.164842 | orchestrator | Monday 05 May 2025 00:42:32 +0000 (0:00:00.152) 0:00:16.459 ************ 2025-05-05 00:42:32.164875 | orchestrator | ok: [testbed-node-3] => { 2025-05-05 00:42:32.165522 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-05 00:42:32.166960 | orchestrator | } 2025-05-05 00:42:32.167779 | orchestrator | 2025-05-05 00:42:32.168533 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-05 00:42:32.169291 | orchestrator | Monday 05 May 2025 00:42:32 +0000 (0:00:00.138) 0:00:16.598 ************ 2025-05-05 00:42:32.316641 | orchestrator | ok: [testbed-node-3] => { 2025-05-05 00:42:32.316888 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-05 00:42:32.316922 | orchestrator | } 2025-05-05 00:42:32.317511 | orchestrator | 2025-05-05 00:42:32.319328 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-05 00:42:32.323738 | orchestrator | Monday 05 May 2025 00:42:32 +0000 (0:00:00.151) 0:00:16.749 ************ 2025-05-05 00:42:33.224972 | orchestrator | ok: 
[testbed-node-3] 2025-05-05 00:42:33.225174 | orchestrator | 2025-05-05 00:42:33.226302 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-05 00:42:33.723836 | orchestrator | Monday 05 May 2025 00:42:33 +0000 (0:00:00.907) 0:00:17.657 ************ 2025-05-05 00:42:33.723971 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:42:33.725156 | orchestrator | 2025-05-05 00:42:33.727683 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-05 00:42:33.729174 | orchestrator | Monday 05 May 2025 00:42:33 +0000 (0:00:00.498) 0:00:18.156 ************ 2025-05-05 00:42:34.230141 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:42:34.230601 | orchestrator | 2025-05-05 00:42:34.232850 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-05 00:42:34.233425 | orchestrator | Monday 05 May 2025 00:42:34 +0000 (0:00:00.505) 0:00:18.662 ************ 2025-05-05 00:42:34.378010 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:42:34.378751 | orchestrator | 2025-05-05 00:42:34.379950 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-05 00:42:34.380313 | orchestrator | Monday 05 May 2025 00:42:34 +0000 (0:00:00.149) 0:00:18.811 ************ 2025-05-05 00:42:34.486353 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:34.486753 | orchestrator | 2025-05-05 00:42:34.487673 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-05 00:42:34.488259 | orchestrator | Monday 05 May 2025 00:42:34 +0000 (0:00:00.108) 0:00:18.919 ************ 2025-05-05 00:42:34.608109 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:34.608521 | orchestrator | 2025-05-05 00:42:34.609260 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-05 00:42:34.610624 | orchestrator | 
Monday 05 May 2025 00:42:34 +0000 (0:00:00.122) 0:00:19.042 ************ 2025-05-05 00:42:34.753937 | orchestrator | ok: [testbed-node-3] => { 2025-05-05 00:42:34.754235 | orchestrator |  "vgs_report": { 2025-05-05 00:42:34.754820 | orchestrator |  "vg": [] 2025-05-05 00:42:34.756110 | orchestrator |  } 2025-05-05 00:42:34.756674 | orchestrator | } 2025-05-05 00:42:34.757448 | orchestrator | 2025-05-05 00:42:34.759021 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-05 00:42:34.759569 | orchestrator | Monday 05 May 2025 00:42:34 +0000 (0:00:00.145) 0:00:19.187 ************ 2025-05-05 00:42:34.883189 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:34.884056 | orchestrator | 2025-05-05 00:42:34.886967 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-05 00:42:34.887556 | orchestrator | Monday 05 May 2025 00:42:34 +0000 (0:00:00.127) 0:00:19.314 ************ 2025-05-05 00:42:35.026517 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:35.027430 | orchestrator | 2025-05-05 00:42:35.028440 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-05 00:42:35.028933 | orchestrator | Monday 05 May 2025 00:42:35 +0000 (0:00:00.145) 0:00:19.460 ************ 2025-05-05 00:42:35.159329 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:35.159560 | orchestrator | 2025-05-05 00:42:35.160310 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-05 00:42:35.161889 | orchestrator | Monday 05 May 2025 00:42:35 +0000 (0:00:00.132) 0:00:19.592 ************ 2025-05-05 00:42:35.477858 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:35.478276 | orchestrator | 2025-05-05 00:42:35.479304 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-05 00:42:35.479810 | orchestrator | Monday 
05 May 2025 00:42:35 +0000 (0:00:00.319) 0:00:19.912 ************ 2025-05-05 00:42:35.603466 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:35.603942 | orchestrator | 2025-05-05 00:42:35.604500 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-05 00:42:35.605239 | orchestrator | Monday 05 May 2025 00:42:35 +0000 (0:00:00.124) 0:00:20.036 ************ 2025-05-05 00:42:35.741809 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:35.742759 | orchestrator | 2025-05-05 00:42:35.743561 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-05 00:42:35.746237 | orchestrator | Monday 05 May 2025 00:42:35 +0000 (0:00:00.138) 0:00:20.175 ************ 2025-05-05 00:42:35.883630 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:35.884209 | orchestrator | 2025-05-05 00:42:35.884244 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-05 00:42:35.885122 | orchestrator | Monday 05 May 2025 00:42:35 +0000 (0:00:00.141) 0:00:20.317 ************ 2025-05-05 00:42:36.022492 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.023271 | orchestrator | 2025-05-05 00:42:36.023305 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-05 00:42:36.024440 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.139) 0:00:20.456 ************ 2025-05-05 00:42:36.150588 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.150828 | orchestrator | 2025-05-05 00:42:36.151994 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-05 00:42:36.152921 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.127) 0:00:20.583 ************ 2025-05-05 00:42:36.277531 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.278230 | orchestrator | 2025-05-05 00:42:36.279335 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-05 00:42:36.280230 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.128) 0:00:20.711 ************ 2025-05-05 00:42:36.417895 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.418760 | orchestrator | 2025-05-05 00:42:36.420206 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-05 00:42:36.422160 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.140) 0:00:20.851 ************ 2025-05-05 00:42:36.552063 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.552829 | orchestrator | 2025-05-05 00:42:36.553503 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-05 00:42:36.554679 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.134) 0:00:20.986 ************ 2025-05-05 00:42:36.691024 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.692331 | orchestrator | 2025-05-05 00:42:36.693983 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-05 00:42:36.694913 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.136) 0:00:21.122 ************ 2025-05-05 00:42:36.830385 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.830612 | orchestrator | 2025-05-05 00:42:36.831425 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-05 00:42:36.832201 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.141) 0:00:21.264 ************ 2025-05-05 00:42:36.997867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:36.998929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 
'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:36.998997 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:36.999487 | orchestrator | 2025-05-05 00:42:37.000852 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-05 00:42:37.001330 | orchestrator | Monday 05 May 2025 00:42:36 +0000 (0:00:00.167) 0:00:21.431 ************ 2025-05-05 00:42:37.334293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:37.334690 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:37.337470 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:37.514456 | orchestrator | 2025-05-05 00:42:37.514569 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-05 00:42:37.514587 | orchestrator | Monday 05 May 2025 00:42:37 +0000 (0:00:00.335) 0:00:21.766 ************ 2025-05-05 00:42:37.514619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:37.514690 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:37.515053 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:37.515443 | orchestrator | 2025-05-05 00:42:37.515852 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-05 00:42:37.516176 | orchestrator | Monday 05 May 2025 00:42:37 +0000 (0:00:00.182) 0:00:21.949 ************ 2025-05-05 00:42:37.678014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:37.681811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:37.681886 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:37.681904 | orchestrator | 2025-05-05 00:42:37.681920 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-05 00:42:37.681946 | orchestrator | Monday 05 May 2025 00:42:37 +0000 (0:00:00.159) 0:00:22.108 ************ 2025-05-05 00:42:37.832949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:37.836461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:37.837436 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:37.837472 | orchestrator | 2025-05-05 00:42:37.837494 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-05 00:42:37.999169 | orchestrator | Monday 05 May 2025 00:42:37 +0000 (0:00:00.157) 0:00:22.266 ************ 2025-05-05 00:42:37.999306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:37.999406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:37.999917 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:38.002656 | orchestrator | 2025-05-05 00:42:38.187237 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-05-05 00:42:38.187357 | orchestrator | Monday 05 May 2025 00:42:37 +0000 (0:00:00.164) 0:00:22.431 ************ 2025-05-05 00:42:38.187412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:38.187534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:38.187776 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:38.190537 | orchestrator | 2025-05-05 00:42:38.190846 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-05 00:42:38.190874 | orchestrator | Monday 05 May 2025 00:42:38 +0000 (0:00:00.188) 0:00:22.619 ************ 2025-05-05 00:42:38.351026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:38.351249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:38.352072 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:38.352753 | orchestrator | 2025-05-05 00:42:38.353748 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-05 00:42:38.356101 | orchestrator | Monday 05 May 2025 00:42:38 +0000 (0:00:00.165) 0:00:22.785 ************ 2025-05-05 00:42:38.842275 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:42:38.842456 | orchestrator | 2025-05-05 00:42:38.843165 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-05 00:42:38.843941 | orchestrator | Monday 05 May 2025 00:42:38 +0000 
(0:00:00.490) 0:00:23.275 ************ 2025-05-05 00:42:39.355800 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:42:39.356196 | orchestrator | 2025-05-05 00:42:39.356229 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-05 00:42:39.356252 | orchestrator | Monday 05 May 2025 00:42:39 +0000 (0:00:00.512) 0:00:23.787 ************ 2025-05-05 00:42:39.501008 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:42:39.501219 | orchestrator | 2025-05-05 00:42:39.502133 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-05 00:42:39.502744 | orchestrator | Monday 05 May 2025 00:42:39 +0000 (0:00:00.146) 0:00:23.934 ************ 2025-05-05 00:42:39.679218 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'vg_name': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'}) 2025-05-05 00:42:39.679746 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'vg_name': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'}) 2025-05-05 00:42:39.682973 | orchestrator | 2025-05-05 00:42:39.686766 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-05 00:42:39.688064 | orchestrator | Monday 05 May 2025 00:42:39 +0000 (0:00:00.177) 0:00:24.112 ************ 2025-05-05 00:42:40.051618 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:40.053408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:40.054136 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:40.055268 | orchestrator | 2025-05-05 00:42:40.056531 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-05-05 00:42:40.057124 | orchestrator | Monday 05 May 2025 00:42:40 +0000 (0:00:00.372) 0:00:24.485 ************ 2025-05-05 00:42:40.219660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:40.220834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:40.221367 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:40.221853 | orchestrator | 2025-05-05 00:42:40.222822 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-05 00:42:40.223512 | orchestrator | Monday 05 May 2025 00:42:40 +0000 (0:00:00.167) 0:00:24.652 ************ 2025-05-05 00:42:40.398663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})  2025-05-05 00:42:40.398887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})  2025-05-05 00:42:40.399772 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:42:40.400213 | orchestrator | 2025-05-05 00:42:40.403090 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-05 00:42:41.077830 | orchestrator | Monday 05 May 2025 00:42:40 +0000 (0:00:00.179) 0:00:24.832 ************ 2025-05-05 00:42:41.077987 | orchestrator | ok: [testbed-node-3] => { 2025-05-05 00:42:41.078553 | orchestrator |  "lvm_report": { 2025-05-05 00:42:41.079458 | orchestrator |  "lv": [ 2025-05-05 00:42:41.080218 | orchestrator |  { 2025-05-05 00:42:41.082513 | orchestrator |  "lv_name": 
"osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be", 2025-05-05 00:42:41.082797 | orchestrator |  "vg_name": "ceph-ac6a629e-412f-52b8-abc2-7f30e47159be" 2025-05-05 00:42:41.082829 | orchestrator |  }, 2025-05-05 00:42:41.082851 | orchestrator |  { 2025-05-05 00:42:41.083656 | orchestrator |  "lv_name": "osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621", 2025-05-05 00:42:41.084770 | orchestrator |  "vg_name": "ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621" 2025-05-05 00:42:41.085662 | orchestrator |  } 2025-05-05 00:42:41.086461 | orchestrator |  ], 2025-05-05 00:42:41.086862 | orchestrator |  "pv": [ 2025-05-05 00:42:41.087492 | orchestrator |  { 2025-05-05 00:42:41.088089 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-05 00:42:41.088547 | orchestrator |  "vg_name": "ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621" 2025-05-05 00:42:41.089356 | orchestrator |  }, 2025-05-05 00:42:41.089921 | orchestrator |  { 2025-05-05 00:42:41.090519 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-05 00:42:41.091036 | orchestrator |  "vg_name": "ceph-ac6a629e-412f-52b8-abc2-7f30e47159be" 2025-05-05 00:42:41.091664 | orchestrator |  } 2025-05-05 00:42:41.092096 | orchestrator |  ] 2025-05-05 00:42:41.092763 | orchestrator |  } 2025-05-05 00:42:41.093442 | orchestrator | } 2025-05-05 00:42:41.093898 | orchestrator | 2025-05-05 00:42:41.094425 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-05 00:42:41.095032 | orchestrator | 2025-05-05 00:42:41.095407 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-05 00:42:41.095848 | orchestrator | Monday 05 May 2025 00:42:41 +0000 (0:00:00.679) 0:00:25.511 ************ 2025-05-05 00:42:41.538320 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-05 00:42:41.538676 | orchestrator | 2025-05-05 00:42:41.539431 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-05 
00:42:41.539965 | orchestrator | Monday 05 May 2025 00:42:41 +0000 (0:00:00.459) 0:00:25.971 ************ 2025-05-05 00:42:41.794537 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:42:41.794796 | orchestrator | 2025-05-05 00:42:41.795012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:41.795917 | orchestrator | Monday 05 May 2025 00:42:41 +0000 (0:00:00.256) 0:00:26.228 ************ 2025-05-05 00:42:42.246306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-05 00:42:42.246475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-05 00:42:42.247618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-05 00:42:42.248514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-05 00:42:42.249428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-05 00:42:42.249484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-05 00:42:42.249737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-05 00:42:42.250376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-05 00:42:42.251302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-05 00:42:42.251974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-05 00:42:42.252387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-05 00:42:42.252862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-05 00:42:42.253530 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-05 00:42:42.253756 | orchestrator | 2025-05-05 00:42:42.254143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:42.254543 | orchestrator | Monday 05 May 2025 00:42:42 +0000 (0:00:00.450) 0:00:26.678 ************ 2025-05-05 00:42:42.444211 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:42.444749 | orchestrator | 2025-05-05 00:42:42.445332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:42.445913 | orchestrator | Monday 05 May 2025 00:42:42 +0000 (0:00:00.198) 0:00:26.877 ************ 2025-05-05 00:42:42.636133 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:42.636778 | orchestrator | 2025-05-05 00:42:42.636799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:42.637179 | orchestrator | Monday 05 May 2025 00:42:42 +0000 (0:00:00.191) 0:00:27.069 ************ 2025-05-05 00:42:42.834738 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:42.835031 | orchestrator | 2025-05-05 00:42:42.835152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:42.835803 | orchestrator | Monday 05 May 2025 00:42:42 +0000 (0:00:00.198) 0:00:27.268 ************ 2025-05-05 00:42:43.039974 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:43.040416 | orchestrator | 2025-05-05 00:42:43.040462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:43.041236 | orchestrator | Monday 05 May 2025 00:42:43 +0000 (0:00:00.204) 0:00:27.473 ************ 2025-05-05 00:42:43.239907 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:43.240111 | orchestrator | 2025-05-05 00:42:43.240537 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-05-05 00:42:43.241278 | orchestrator | Monday 05 May 2025 00:42:43 +0000 (0:00:00.199) 0:00:27.672 ************ 2025-05-05 00:42:43.432248 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:43.432613 | orchestrator | 2025-05-05 00:42:43.433684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:43.434206 | orchestrator | Monday 05 May 2025 00:42:43 +0000 (0:00:00.193) 0:00:27.866 ************ 2025-05-05 00:42:44.014643 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:44.015806 | orchestrator | 2025-05-05 00:42:44.018878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:44.227994 | orchestrator | Monday 05 May 2025 00:42:44 +0000 (0:00:00.580) 0:00:28.447 ************ 2025-05-05 00:42:44.228130 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:42:44.231195 | orchestrator | 2025-05-05 00:42:44.233550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:44.666661 | orchestrator | Monday 05 May 2025 00:42:44 +0000 (0:00:00.214) 0:00:28.661 ************ 2025-05-05 00:42:44.666809 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b) 2025-05-05 00:42:44.667368 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b) 2025-05-05 00:42:44.668334 | orchestrator | 2025-05-05 00:42:44.669265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:42:44.670104 | orchestrator | Monday 05 May 2025 00:42:44 +0000 (0:00:00.438) 0:00:29.099 ************ 2025-05-05 00:42:45.135160 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164) 2025-05-05 00:42:45.135901 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164)
2025-05-05 00:42:45.137569 | orchestrator |
2025-05-05 00:42:45.140939 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:45.141637 | orchestrator | Monday 05 May 2025 00:42:45 +0000 (0:00:00.468) 0:00:29.568 ************
2025-05-05 00:42:45.560467 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170)
2025-05-05 00:42:45.561827 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170)
2025-05-05 00:42:45.562432 | orchestrator |
2025-05-05 00:42:45.562920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:45.563482 | orchestrator | Monday 05 May 2025 00:42:45 +0000 (0:00:00.424) 0:00:29.993 ************
2025-05-05 00:42:45.977263 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e)
2025-05-05 00:42:45.977966 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e)
2025-05-05 00:42:45.978083 | orchestrator |
2025-05-05 00:42:45.978307 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-05 00:42:45.978873 | orchestrator | Monday 05 May 2025 00:42:45 +0000 (0:00:00.417) 0:00:30.410 ************
2025-05-05 00:42:46.343811 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-05 00:42:46.344378 | orchestrator |
2025-05-05 00:42:46.345713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:46.346100 | orchestrator | Monday 05 May 2025 00:42:46 +0000 (0:00:00.363) 0:00:30.774 ************
2025-05-05 00:42:46.790405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-05 00:42:46.790656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-05 00:42:46.790688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-05 00:42:46.790763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-05 00:42:46.790786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-05 00:42:46.791193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-05 00:42:46.792065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-05 00:42:46.792420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-05 00:42:46.795000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-05 00:42:46.795367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-05 00:42:46.795395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-05 00:42:46.795415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-05 00:42:46.796494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-05 00:42:46.796839 | orchestrator |
2025-05-05 00:42:46.797405 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:46.797490 | orchestrator | Monday 05 May 2025 00:42:46 +0000 (0:00:00.448) 0:00:31.223 ************
2025-05-05 00:42:46.988627 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:47.409793 | orchestrator |
2025-05-05 00:42:47.410844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:47.410909 | orchestrator | Monday 05 May 2025 00:42:46 +0000 (0:00:00.198) 0:00:31.421 ************
2025-05-05 00:42:47.410944 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:47.411103 | orchestrator |
2025-05-05 00:42:47.411134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:47.412015 | orchestrator | Monday 05 May 2025 00:42:47 +0000 (0:00:00.420) 0:00:31.842 ************
2025-05-05 00:42:47.606459 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:47.609371 | orchestrator |
2025-05-05 00:42:47.609818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:47.609841 | orchestrator | Monday 05 May 2025 00:42:47 +0000 (0:00:00.195) 0:00:32.038 ************
2025-05-05 00:42:47.805494 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:47.805785 | orchestrator |
2025-05-05 00:42:47.806473 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:47.807049 | orchestrator | Monday 05 May 2025 00:42:47 +0000 (0:00:00.200) 0:00:32.238 ************
2025-05-05 00:42:47.995448 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:47.996169 | orchestrator |
2025-05-05 00:42:47.996902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:47.997378 | orchestrator | Monday 05 May 2025 00:42:47 +0000 (0:00:00.190) 0:00:32.429 ************
2025-05-05 00:42:48.197631 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:48.200150 | orchestrator |
2025-05-05 00:42:48.413110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:48.413223 | orchestrator | Monday 05 May 2025 00:42:48 +0000 (0:00:00.199) 0:00:32.629 ************
2025-05-05 00:42:48.413260 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:48.607479 | orchestrator |
2025-05-05 00:42:48.607590 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:48.607625 | orchestrator | Monday 05 May 2025 00:42:48 +0000 (0:00:00.214) 0:00:32.843 ************
2025-05-05 00:42:48.607669 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:48.607786 | orchestrator |
2025-05-05 00:42:48.608570 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:48.609109 | orchestrator | Monday 05 May 2025 00:42:48 +0000 (0:00:00.196) 0:00:33.040 ************
2025-05-05 00:42:49.230287 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-05 00:42:49.231008 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-05 00:42:49.231814 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-05 00:42:49.234803 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-05 00:42:49.427204 | orchestrator |
2025-05-05 00:42:49.427316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:49.427334 | orchestrator | Monday 05 May 2025 00:42:49 +0000 (0:00:00.622) 0:00:33.663 ************
2025-05-05 00:42:49.427365 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:49.427875 | orchestrator |
2025-05-05 00:42:49.428798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:49.429791 | orchestrator | Monday 05 May 2025 00:42:49 +0000 (0:00:00.196) 0:00:33.859 ************
2025-05-05 00:42:49.626935 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:49.630200 | orchestrator |
2025-05-05 00:42:49.841182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:49.841324 | orchestrator | Monday 05 May 2025 00:42:49 +0000 (0:00:00.199) 0:00:34.059 ************
2025-05-05 00:42:49.841360 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:49.842597 | orchestrator |
2025-05-05 00:42:49.843812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-05 00:42:49.844746 | orchestrator | Monday 05 May 2025 00:42:49 +0000 (0:00:00.215) 0:00:34.274 ************
2025-05-05 00:42:50.473613 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:50.474266 | orchestrator |
2025-05-05 00:42:50.475008 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-05 00:42:50.477504 | orchestrator | Monday 05 May 2025 00:42:50 +0000 (0:00:00.630) 0:00:34.905 ************
2025-05-05 00:42:50.614870 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:50.615253 | orchestrator |
2025-05-05 00:42:50.616165 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-05 00:42:50.617070 | orchestrator | Monday 05 May 2025 00:42:50 +0000 (0:00:00.142) 0:00:35.048 ************
2025-05-05 00:42:50.847323 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'}})
2025-05-05 00:42:50.848201 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbbf782-cf90-597f-b1d9-d891fd7b35f3'}})
2025-05-05 00:42:50.848297 | orchestrator |
2025-05-05 00:42:50.848321 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-05 00:42:50.848637 | orchestrator | Monday 05 May 2025 00:42:50 +0000 (0:00:00.232) 0:00:35.281 ************
2025-05-05 00:42:52.689422 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:52.689657 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:52.689687 | orchestrator |
2025-05-05 00:42:52.691847 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-05 00:42:52.852158 | orchestrator | Monday 05 May 2025 00:42:52 +0000 (0:00:01.838) 0:00:37.120 ************
2025-05-05 00:42:52.852290 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:52.853560 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:52.854869 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:52.856178 | orchestrator |
2025-05-05 00:42:52.856842 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-05 00:42:52.858096 | orchestrator | Monday 05 May 2025 00:42:52 +0000 (0:00:00.163) 0:00:37.283 ************
2025-05-05 00:42:54.158298 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:54.158774 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:54.160303 | orchestrator |
2025-05-05 00:42:54.161684 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-05 00:42:54.162387 | orchestrator | Monday 05 May 2025 00:42:54 +0000 (0:00:01.305) 0:00:38.589 ************
2025-05-05 00:42:54.320162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:54.320788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:54.321637 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:54.322612 | orchestrator |
2025-05-05 00:42:54.323721 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-05 00:42:54.324788 | orchestrator | Monday 05 May 2025 00:42:54 +0000 (0:00:00.163) 0:00:38.752 ************
2025-05-05 00:42:54.458010 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:54.459187 | orchestrator |
2025-05-05 00:42:54.459370 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-05 00:42:54.460367 | orchestrator | Monday 05 May 2025 00:42:54 +0000 (0:00:00.137) 0:00:38.890 ************
2025-05-05 00:42:54.637091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:54.637302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:54.637919 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:54.638509 | orchestrator |
2025-05-05 00:42:54.638948 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-05 00:42:54.641938 | orchestrator | Monday 05 May 2025 00:42:54 +0000 (0:00:00.179) 0:00:39.070 ************
2025-05-05 00:42:54.969517 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:54.969938 | orchestrator |
2025-05-05 00:42:54.971165 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-05 00:42:54.971425 | orchestrator | Monday 05 May 2025 00:42:54 +0000 (0:00:00.331) 0:00:39.401 ************
2025-05-05 00:42:55.132680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:55.133017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:55.133782 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:55.134570 | orchestrator |
2025-05-05 00:42:55.135539 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-05 00:42:55.137414 | orchestrator | Monday 05 May 2025 00:42:55 +0000 (0:00:00.163) 0:00:39.565 ************
2025-05-05 00:42:55.275175 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:55.275437 | orchestrator |
2025-05-05 00:42:55.275887 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-05 00:42:55.276741 | orchestrator | Monday 05 May 2025 00:42:55 +0000 (0:00:00.142) 0:00:39.708 ************
2025-05-05 00:42:55.451160 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:55.451643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:55.451688 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:55.452240 | orchestrator |
2025-05-05 00:42:55.452872 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-05 00:42:55.453529 | orchestrator | Monday 05 May 2025 00:42:55 +0000 (0:00:00.175) 0:00:39.883 ************
2025-05-05 00:42:55.592188 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:42:55.592648 | orchestrator |
2025-05-05 00:42:55.593454 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-05 00:42:55.593825 | orchestrator | Monday 05 May 2025 00:42:55 +0000 (0:00:00.141) 0:00:40.025 ************
2025-05-05 00:42:55.790330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:55.790929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:55.792135 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:55.792632 | orchestrator |
2025-05-05 00:42:55.793035 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-05 00:42:55.793479 | orchestrator | Monday 05 May 2025 00:42:55 +0000 (0:00:00.192) 0:00:40.217 ************
2025-05-05 00:42:55.949511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:55.949765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:55.950128 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:55.951083 | orchestrator |
2025-05-05 00:42:55.951121 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-05 00:42:55.951500 | orchestrator | Monday 05 May 2025 00:42:55 +0000 (0:00:00.165) 0:00:40.383 ************
2025-05-05 00:42:56.112941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:42:56.113863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:42:56.114917 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:56.115364 | orchestrator |
2025-05-05 00:42:56.115804 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-05 00:42:56.116259 | orchestrator | Monday 05 May 2025 00:42:56 +0000 (0:00:00.162) 0:00:40.545 ************
2025-05-05 00:42:56.258668 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:56.259036 | orchestrator |
2025-05-05 00:42:56.259074 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-05 00:42:56.259912 | orchestrator | Monday 05 May 2025 00:42:56 +0000 (0:00:00.145) 0:00:40.691 ************
2025-05-05 00:42:56.404813 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:56.405040 | orchestrator |
2025-05-05 00:42:56.406392 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-05 00:42:56.406650 | orchestrator | Monday 05 May 2025 00:42:56 +0000 (0:00:00.146) 0:00:40.838 ************
2025-05-05 00:42:56.548054 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:56.548760 | orchestrator |
2025-05-05 00:42:56.549284 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-05 00:42:56.550291 | orchestrator | Monday 05 May 2025 00:42:56 +0000 (0:00:00.141) 0:00:40.979 ************
2025-05-05 00:42:56.880972 | orchestrator | ok: [testbed-node-4] => {
2025-05-05 00:42:56.881680 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-05 00:42:56.882762 | orchestrator | }
2025-05-05 00:42:56.883823 | orchestrator |
2025-05-05 00:42:56.884286 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-05 00:42:56.884966 | orchestrator | Monday 05 May 2025 00:42:56 +0000 (0:00:00.145) 0:00:41.311 ************
2025-05-05 00:42:57.024574 | orchestrator | ok: [testbed-node-4] => {
2025-05-05 00:42:57.024888 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-05 00:42:57.026762 | orchestrator | }
2025-05-05 00:42:57.026833 | orchestrator |
2025-05-05 00:42:57.029822 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-05 00:42:57.170840 | orchestrator | Monday 05 May 2025 00:42:57 +0000 (0:00:00.145) 0:00:41.457 ************
2025-05-05 00:42:57.171006 | orchestrator | ok: [testbed-node-4] => {
2025-05-05 00:42:57.172288 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-05 00:42:57.172323 | orchestrator | }
2025-05-05 00:42:57.172379 | orchestrator |
2025-05-05 00:42:57.172401 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-05 00:42:57.172894 | orchestrator | Monday 05 May 2025 00:42:57 +0000 (0:00:00.143) 0:00:41.600 ************
2025-05-05 00:42:57.682856 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:42:57.683168 | orchestrator |
2025-05-05 00:42:57.683580 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-05 00:42:57.684248 | orchestrator | Monday 05 May 2025 00:42:57 +0000 (0:00:00.513) 0:00:42.114 ************
2025-05-05 00:42:58.198973 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:42:58.199340 | orchestrator |
2025-05-05 00:42:58.200017 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-05 00:42:58.200940 | orchestrator | Monday 05 May 2025 00:42:58 +0000 (0:00:00.515) 0:00:42.630 ************
2025-05-05 00:42:58.706916 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:42:58.707390 | orchestrator |
2025-05-05 00:42:58.707441 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-05 00:42:58.707752 | orchestrator | Monday 05 May 2025 00:42:58 +0000 (0:00:00.510) 0:00:43.140 ************
2025-05-05 00:42:58.866449 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:42:58.866932 | orchestrator |
2025-05-05 00:42:58.867896 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-05 00:42:58.869434 | orchestrator | Monday 05 May 2025 00:42:58 +0000 (0:00:00.157) 0:00:43.298 ************
2025-05-05 00:42:58.971057 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:58.971822 | orchestrator |
2025-05-05 00:42:58.972159 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-05 00:42:58.972806 | orchestrator | Monday 05 May 2025 00:42:58 +0000 (0:00:00.106) 0:00:43.404 ************
2025-05-05 00:42:59.084045 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:59.084261 | orchestrator |
2025-05-05 00:42:59.084963 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-05 00:42:59.085727 | orchestrator | Monday 05 May 2025 00:42:59 +0000 (0:00:00.112) 0:00:43.516 ************
2025-05-05 00:42:59.221595 | orchestrator | ok: [testbed-node-4] => {
2025-05-05 00:42:59.221859 | orchestrator |  "vgs_report": {
2025-05-05 00:42:59.221958 | orchestrator |  "vg": []
2025-05-05 00:42:59.223386 | orchestrator |  }
2025-05-05 00:42:59.224092 | orchestrator | }
2025-05-05 00:42:59.224895 | orchestrator |
2025-05-05 00:42:59.225755 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-05 00:42:59.226975 | orchestrator | Monday 05 May 2025 00:42:59 +0000 (0:00:00.137) 0:00:43.654 ************
2025-05-05 00:42:59.371050 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:59.371250 | orchestrator |
2025-05-05 00:42:59.375016 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-05 00:42:59.697484 | orchestrator | Monday 05 May 2025 00:42:59 +0000 (0:00:00.145) 0:00:43.799 ************
2025-05-05 00:42:59.697640 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:59.697904 | orchestrator |
2025-05-05 00:42:59.697940 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-05 00:42:59.698498 | orchestrator | Monday 05 May 2025 00:42:59 +0000 (0:00:00.330) 0:00:44.130 ************
2025-05-05 00:42:59.843634 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:59.843901 | orchestrator |
2025-05-05 00:42:59.844301 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-05 00:42:59.845528 | orchestrator | Monday 05 May 2025 00:42:59 +0000 (0:00:00.145) 0:00:44.275 ************
2025-05-05 00:42:59.983164 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:42:59.983379 | orchestrator |
2025-05-05 00:42:59.983686 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-05 00:42:59.984708 | orchestrator | Monday 05 May 2025 00:42:59 +0000 (0:00:00.140) 0:00:44.416 ************
2025-05-05 00:43:00.124091 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.124665 | orchestrator |
2025-05-05 00:43:00.124774 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-05 00:43:00.125669 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.140) 0:00:44.556 ************
2025-05-05 00:43:00.269065 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.269563 | orchestrator |
2025-05-05 00:43:00.270417 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-05 00:43:00.270898 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.145) 0:00:44.702 ************
2025-05-05 00:43:00.409146 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.410099 | orchestrator |
2025-05-05 00:43:00.410408 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-05 00:43:00.411275 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.139) 0:00:44.841 ************
2025-05-05 00:43:00.534922 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.535814 | orchestrator |
2025-05-05 00:43:00.538879 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-05 00:43:00.543361 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.124) 0:00:44.966 ************
2025-05-05 00:43:00.682631 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.682945 | orchestrator |
2025-05-05 00:43:00.684327 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-05 00:43:00.687279 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.148) 0:00:45.114 ************
2025-05-05 00:43:00.824266 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.824938 | orchestrator |
2025-05-05 00:43:00.827967 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-05 00:43:00.828288 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.141) 0:00:45.256 ************
2025-05-05 00:43:00.978849 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:00.979053 | orchestrator |
2025-05-05 00:43:00.979941 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-05 00:43:00.980939 | orchestrator | Monday 05 May 2025 00:43:00 +0000 (0:00:00.156) 0:00:45.412 ************
2025-05-05 00:43:01.117338 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:01.117796 | orchestrator |
2025-05-05 00:43:01.118795 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-05 00:43:01.119914 | orchestrator | Monday 05 May 2025 00:43:01 +0000 (0:00:00.137) 0:00:45.550 ************
2025-05-05 00:43:01.263404 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:01.263977 | orchestrator |
2025-05-05 00:43:01.265297 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-05 00:43:01.265984 | orchestrator | Monday 05 May 2025 00:43:01 +0000 (0:00:00.145) 0:00:45.695 ************
2025-05-05 00:43:01.615019 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:01.616114 | orchestrator |
2025-05-05 00:43:01.617293 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-05 00:43:01.620324 | orchestrator | Monday 05 May 2025 00:43:01 +0000 (0:00:00.352) 0:00:46.047 ************
2025-05-05 00:43:01.791841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:01.792965 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:01.795197 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:01.796412 | orchestrator |
2025-05-05 00:43:01.797942 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-05 00:43:01.798920 | orchestrator | Monday 05 May 2025 00:43:01 +0000 (0:00:00.176) 0:00:46.224 ************
2025-05-05 00:43:01.970345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:01.970963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:01.971008 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.136977 | orchestrator |
2025-05-05 00:43:02.137095 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-05 00:43:02.137115 | orchestrator | Monday 05 May 2025 00:43:01 +0000 (0:00:00.176) 0:00:46.401 ************
2025-05-05 00:43:02.137145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:02.137210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:02.137936 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.138243 | orchestrator |
2025-05-05 00:43:02.138812 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-05 00:43:02.139476 | orchestrator | Monday 05 May 2025 00:43:02 +0000 (0:00:00.168) 0:00:46.570 ************
2025-05-05 00:43:02.289829 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:02.290176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:02.290275 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.291063 | orchestrator |
2025-05-05 00:43:02.298977 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-05 00:43:02.302750 | orchestrator | Monday 05 May 2025 00:43:02 +0000 (0:00:00.152) 0:00:46.723 ************
2025-05-05 00:43:02.464483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:02.465151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:02.466552 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.467629 | orchestrator |
2025-05-05 00:43:02.467677 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-05 00:43:02.468886 | orchestrator | Monday 05 May 2025 00:43:02 +0000 (0:00:00.174) 0:00:46.897 ************
2025-05-05 00:43:02.645599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:02.646491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:02.647814 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.651254 | orchestrator |
2025-05-05 00:43:02.818889 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-05 00:43:02.819012 | orchestrator | Monday 05 May 2025 00:43:02 +0000 (0:00:00.180) 0:00:47.078 ************
2025-05-05 00:43:02.819049 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:02.820634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:02.821684 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.824896 | orchestrator |
2025-05-05 00:43:02.826109 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-05 00:43:02.827511 | orchestrator | Monday 05 May 2025 00:43:02 +0000 (0:00:00.172) 0:00:47.250 ************
2025-05-05 00:43:02.988580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:02.989608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:02.990645 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:02.991918 | orchestrator |
2025-05-05 00:43:02.992490 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-05 00:43:02.993307 | orchestrator | Monday 05 May 2025 00:43:02 +0000 (0:00:00.169) 0:00:47.420 ************
2025-05-05 00:43:03.496797 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:43:03.497783 | orchestrator |
2025-05-05 00:43:03.501206 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-05 00:43:04.018399 | orchestrator | Monday 05 May 2025 00:43:03 +0000 (0:00:00.508) 0:00:47.928 ************
2025-05-05 00:43:04.018539 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:43:04.019160 | orchestrator |
2025-05-05 00:43:04.020448 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-05 00:43:04.021993 | orchestrator | Monday 05 May 2025 00:43:04 +0000 (0:00:00.520) 0:00:48.449 ************
2025-05-05 00:43:04.367611 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:43:04.368772 | orchestrator |
2025-05-05 00:43:04.369399 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-05 00:43:04.370803 | orchestrator | Monday 05 May 2025 00:43:04 +0000 (0:00:00.351) 0:00:48.800 ************
2025-05-05 00:43:04.556224 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'vg_name': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:04.719399 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'vg_name': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:04.719553 | orchestrator |
2025-05-05 00:43:04.719583 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-05 00:43:04.719609 | orchestrator | Monday 05 May 2025 00:43:04 +0000 (0:00:00.184) 0:00:48.984 ************
2025-05-05 00:43:04.719655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:04.720465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:04.721633 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:04.724064 | orchestrator |
2025-05-05 00:43:04.899661 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-05 00:43:04.899946 | orchestrator | Monday 05 May 2025 00:43:04 +0000 (0:00:00.167) 0:00:49.152 ************
2025-05-05 00:43:04.900008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:43:04.900949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:43:04.903232 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:43:04.907016 | orchestrator |
2025-05-05 00:43:04.908048 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-05 00:43:04.908555 |
orchestrator | Monday 05 May 2025 00:43:04 +0000 (0:00:00.179) 0:00:49.332 ************ 2025-05-05 00:43:05.070732 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})  2025-05-05 00:43:05.071720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})  2025-05-05 00:43:05.073240 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:43:05.073767 | orchestrator | 2025-05-05 00:43:05.074498 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-05 00:43:05.075242 | orchestrator | Monday 05 May 2025 00:43:05 +0000 (0:00:00.171) 0:00:49.503 ************ 2025-05-05 00:43:05.953992 | orchestrator | ok: [testbed-node-4] => { 2025-05-05 00:43:05.954412 | orchestrator |  "lvm_report": { 2025-05-05 00:43:05.956567 | orchestrator |  "lv": [ 2025-05-05 00:43:05.956897 | orchestrator |  { 2025-05-05 00:43:05.961530 | orchestrator |  "lv_name": "osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f", 2025-05-05 00:43:05.962306 | orchestrator |  "vg_name": "ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f" 2025-05-05 00:43:05.963015 | orchestrator |  }, 2025-05-05 00:43:05.963233 | orchestrator |  { 2025-05-05 00:43:05.964073 | orchestrator |  "lv_name": "osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3", 2025-05-05 00:43:05.965270 | orchestrator |  "vg_name": "ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3" 2025-05-05 00:43:05.965613 | orchestrator |  } 2025-05-05 00:43:05.968549 | orchestrator |  ], 2025-05-05 00:43:05.969061 | orchestrator |  "pv": [ 2025-05-05 00:43:05.969861 | orchestrator |  { 2025-05-05 00:43:05.972741 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-05 00:43:05.972859 | orchestrator |  "vg_name": "ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f" 2025-05-05 00:43:05.975045 | orchestrator |  }, 2025-05-05 
00:43:05.975657 | orchestrator |  { 2025-05-05 00:43:05.977425 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-05 00:43:05.978270 | orchestrator |  "vg_name": "ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3" 2025-05-05 00:43:05.978420 | orchestrator |  } 2025-05-05 00:43:05.979376 | orchestrator |  ] 2025-05-05 00:43:05.979816 | orchestrator |  } 2025-05-05 00:43:05.980271 | orchestrator | } 2025-05-05 00:43:05.980911 | orchestrator | 2025-05-05 00:43:05.981163 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-05 00:43:05.981926 | orchestrator | 2025-05-05 00:43:05.982112 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-05 00:43:05.982553 | orchestrator | Monday 05 May 2025 00:43:05 +0000 (0:00:00.883) 0:00:50.386 ************ 2025-05-05 00:43:06.212823 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-05 00:43:06.213027 | orchestrator | 2025-05-05 00:43:06.213474 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-05 00:43:06.214088 | orchestrator | Monday 05 May 2025 00:43:06 +0000 (0:00:00.256) 0:00:50.643 ************ 2025-05-05 00:43:06.449048 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:06.449287 | orchestrator | 2025-05-05 00:43:06.451286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:06.451486 | orchestrator | Monday 05 May 2025 00:43:06 +0000 (0:00:00.237) 0:00:50.880 ************ 2025-05-05 00:43:06.906307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-05 00:43:06.907400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-05 00:43:06.908146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-05 00:43:06.909266 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-05 00:43:06.910879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-05 00:43:06.913614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-05 00:43:06.913650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-05 00:43:06.915429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-05 00:43:06.915993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-05 00:43:06.917348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-05 00:43:06.918442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-05 00:43:06.919451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-05 00:43:06.919878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-05 00:43:06.920889 | orchestrator | 2025-05-05 00:43:06.921347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:06.922160 | orchestrator | Monday 05 May 2025 00:43:06 +0000 (0:00:00.458) 0:00:51.338 ************ 2025-05-05 00:43:07.107029 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:07.108457 | orchestrator | 2025-05-05 00:43:07.108980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:07.110244 | orchestrator | Monday 05 May 2025 00:43:07 +0000 (0:00:00.200) 0:00:51.538 ************ 2025-05-05 00:43:07.309250 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:07.310374 | orchestrator | 2025-05-05 
00:43:07.310660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:07.312136 | orchestrator | Monday 05 May 2025 00:43:07 +0000 (0:00:00.202) 0:00:51.741 ************ 2025-05-05 00:43:07.522391 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:07.525315 | orchestrator | 2025-05-05 00:43:07.525365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:07.710657 | orchestrator | Monday 05 May 2025 00:43:07 +0000 (0:00:00.213) 0:00:51.954 ************ 2025-05-05 00:43:07.710885 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:07.712441 | orchestrator | 2025-05-05 00:43:08.293890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:08.294064 | orchestrator | Monday 05 May 2025 00:43:07 +0000 (0:00:00.188) 0:00:52.143 ************ 2025-05-05 00:43:08.294105 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:08.294527 | orchestrator | 2025-05-05 00:43:08.296596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:08.505515 | orchestrator | Monday 05 May 2025 00:43:08 +0000 (0:00:00.581) 0:00:52.725 ************ 2025-05-05 00:43:08.506499 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:08.506886 | orchestrator | 2025-05-05 00:43:08.506919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:08.506940 | orchestrator | Monday 05 May 2025 00:43:08 +0000 (0:00:00.211) 0:00:52.937 ************ 2025-05-05 00:43:08.702319 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:08.703343 | orchestrator | 2025-05-05 00:43:08.703388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:08.703921 | orchestrator | Monday 05 May 2025 00:43:08 +0000 (0:00:00.196) 
0:00:53.134 ************ 2025-05-05 00:43:08.930733 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:08.931522 | orchestrator | 2025-05-05 00:43:08.931565 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:08.931969 | orchestrator | Monday 05 May 2025 00:43:08 +0000 (0:00:00.228) 0:00:53.363 ************ 2025-05-05 00:43:09.381063 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e) 2025-05-05 00:43:09.381438 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e) 2025-05-05 00:43:09.382099 | orchestrator | 2025-05-05 00:43:09.382551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:09.383025 | orchestrator | Monday 05 May 2025 00:43:09 +0000 (0:00:00.449) 0:00:53.812 ************ 2025-05-05 00:43:09.810669 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370) 2025-05-05 00:43:09.811079 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370) 2025-05-05 00:43:09.811126 | orchestrator | 2025-05-05 00:43:09.811917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:09.816326 | orchestrator | Monday 05 May 2025 00:43:09 +0000 (0:00:00.429) 0:00:54.242 ************ 2025-05-05 00:43:10.233931 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10) 2025-05-05 00:43:10.234203 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10) 2025-05-05 00:43:10.235819 | orchestrator | 2025-05-05 00:43:10.235869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:10.236122 | orchestrator | Monday 05 
May 2025 00:43:10 +0000 (0:00:00.423) 0:00:54.666 ************ 2025-05-05 00:43:10.666919 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d) 2025-05-05 00:43:10.667326 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d) 2025-05-05 00:43:10.668126 | orchestrator | 2025-05-05 00:43:10.669473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-05 00:43:10.670315 | orchestrator | Monday 05 May 2025 00:43:10 +0000 (0:00:00.433) 0:00:55.100 ************ 2025-05-05 00:43:11.000976 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-05 00:43:11.001769 | orchestrator | 2025-05-05 00:43:11.002659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:11.006309 | orchestrator | Monday 05 May 2025 00:43:10 +0000 (0:00:00.332) 0:00:55.433 ************ 2025-05-05 00:43:11.641285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-05 00:43:11.641973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-05 00:43:11.645779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-05 00:43:11.646800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-05 00:43:11.646854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-05 00:43:11.647294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-05 00:43:11.651767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-05 00:43:11.653476 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-05 00:43:11.653596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-05 00:43:11.653624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-05 00:43:11.654617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-05 00:43:11.655295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-05 00:43:11.655997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-05 00:43:11.656793 | orchestrator | 2025-05-05 00:43:11.657380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:11.658178 | orchestrator | Monday 05 May 2025 00:43:11 +0000 (0:00:00.638) 0:00:56.071 ************ 2025-05-05 00:43:11.855363 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:11.856434 | orchestrator | 2025-05-05 00:43:11.857360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:11.860576 | orchestrator | Monday 05 May 2025 00:43:11 +0000 (0:00:00.216) 0:00:56.287 ************ 2025-05-05 00:43:12.068452 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:12.069044 | orchestrator | 2025-05-05 00:43:12.069766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:12.071873 | orchestrator | Monday 05 May 2025 00:43:12 +0000 (0:00:00.211) 0:00:56.499 ************ 2025-05-05 00:43:12.261431 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:12.261638 | orchestrator | 2025-05-05 00:43:12.262401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:12.262853 | 
orchestrator | Monday 05 May 2025 00:43:12 +0000 (0:00:00.194) 0:00:56.694 ************ 2025-05-05 00:43:12.468058 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:12.468600 | orchestrator | 2025-05-05 00:43:12.469287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:12.469961 | orchestrator | Monday 05 May 2025 00:43:12 +0000 (0:00:00.205) 0:00:56.899 ************ 2025-05-05 00:43:12.669628 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:12.670169 | orchestrator | 2025-05-05 00:43:12.670515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:12.671315 | orchestrator | Monday 05 May 2025 00:43:12 +0000 (0:00:00.202) 0:00:57.102 ************ 2025-05-05 00:43:12.861301 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:12.862679 | orchestrator | 2025-05-05 00:43:12.863229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:12.863265 | orchestrator | Monday 05 May 2025 00:43:12 +0000 (0:00:00.191) 0:00:57.293 ************ 2025-05-05 00:43:13.054821 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:13.055025 | orchestrator | 2025-05-05 00:43:13.056165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:13.056919 | orchestrator | Monday 05 May 2025 00:43:13 +0000 (0:00:00.193) 0:00:57.486 ************ 2025-05-05 00:43:13.264019 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:13.264266 | orchestrator | 2025-05-05 00:43:13.264810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:13.265407 | orchestrator | Monday 05 May 2025 00:43:13 +0000 (0:00:00.209) 0:00:57.696 ************ 2025-05-05 00:43:14.103337 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-05 00:43:14.103842 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-05-05 00:43:14.104577 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-05 00:43:14.105476 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-05 00:43:14.108243 | orchestrator | 2025-05-05 00:43:14.311105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:14.311300 | orchestrator | Monday 05 May 2025 00:43:14 +0000 (0:00:00.838) 0:00:58.535 ************ 2025-05-05 00:43:14.311338 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:14.311418 | orchestrator | 2025-05-05 00:43:14.311982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:14.312835 | orchestrator | Monday 05 May 2025 00:43:14 +0000 (0:00:00.207) 0:00:58.743 ************ 2025-05-05 00:43:14.905988 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:14.907138 | orchestrator | 2025-05-05 00:43:14.907904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:14.910562 | orchestrator | Monday 05 May 2025 00:43:14 +0000 (0:00:00.594) 0:00:59.338 ************ 2025-05-05 00:43:15.103929 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:15.105590 | orchestrator | 2025-05-05 00:43:15.105745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-05 00:43:15.105824 | orchestrator | Monday 05 May 2025 00:43:15 +0000 (0:00:00.197) 0:00:59.535 ************ 2025-05-05 00:43:15.311563 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:15.312395 | orchestrator | 2025-05-05 00:43:15.313141 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-05 00:43:15.314276 | orchestrator | Monday 05 May 2025 00:43:15 +0000 (0:00:00.206) 0:00:59.742 ************ 2025-05-05 00:43:15.447913 | orchestrator | skipping: [testbed-node-5] 2025-05-05 
00:43:15.448499 | orchestrator | 2025-05-05 00:43:15.449025 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-05 00:43:15.449683 | orchestrator | Monday 05 May 2025 00:43:15 +0000 (0:00:00.138) 0:00:59.880 ************ 2025-05-05 00:43:15.664585 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ded391-41bb-58c4-acef-51f998367f5e'}}) 2025-05-05 00:43:15.664843 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}}) 2025-05-05 00:43:15.665292 | orchestrator | 2025-05-05 00:43:15.665872 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-05 00:43:15.666837 | orchestrator | Monday 05 May 2025 00:43:15 +0000 (0:00:00.216) 0:01:00.096 ************ 2025-05-05 00:43:17.423295 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'}) 2025-05-05 00:43:17.423477 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}) 2025-05-05 00:43:17.424662 | orchestrator | 2025-05-05 00:43:17.425161 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-05 00:43:17.426604 | orchestrator | Monday 05 May 2025 00:43:17 +0000 (0:00:01.756) 0:01:01.853 ************ 2025-05-05 00:43:17.584029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:17.584499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:17.585476 | orchestrator | skipping: 
[testbed-node-5] 2025-05-05 00:43:17.586353 | orchestrator | 2025-05-05 00:43:17.586756 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-05 00:43:17.587270 | orchestrator | Monday 05 May 2025 00:43:17 +0000 (0:00:00.162) 0:01:02.015 ************ 2025-05-05 00:43:18.859486 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'}) 2025-05-05 00:43:18.860256 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}) 2025-05-05 00:43:18.860870 | orchestrator | 2025-05-05 00:43:18.862397 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-05 00:43:18.864321 | orchestrator | Monday 05 May 2025 00:43:18 +0000 (0:00:01.275) 0:01:03.290 ************ 2025-05-05 00:43:19.033449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:19.033624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:19.034522 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:19.035596 | orchestrator | 2025-05-05 00:43:19.036270 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-05 00:43:19.036950 | orchestrator | Monday 05 May 2025 00:43:19 +0000 (0:00:00.174) 0:01:03.465 ************ 2025-05-05 00:43:19.351392 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:19.352141 | orchestrator | 2025-05-05 00:43:19.352925 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-05 00:43:19.354231 | 
orchestrator | Monday 05 May 2025 00:43:19 +0000 (0:00:00.318) 0:01:03.783 ************ 2025-05-05 00:43:19.533642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:19.534487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:19.535388 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:19.536312 | orchestrator | 2025-05-05 00:43:19.537090 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-05 00:43:19.538328 | orchestrator | Monday 05 May 2025 00:43:19 +0000 (0:00:00.182) 0:01:03.965 ************ 2025-05-05 00:43:19.677187 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:19.677453 | orchestrator | 2025-05-05 00:43:19.678894 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-05 00:43:19.679153 | orchestrator | Monday 05 May 2025 00:43:19 +0000 (0:00:00.144) 0:01:04.109 ************ 2025-05-05 00:43:19.846120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:19.846267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:19.846823 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:19.847432 | orchestrator | 2025-05-05 00:43:19.847896 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-05 00:43:19.849272 | orchestrator | Monday 05 May 2025 00:43:19 +0000 (0:00:00.168) 0:01:04.278 ************ 2025-05-05 00:43:19.974066 | orchestrator | 
skipping: [testbed-node-5] 2025-05-05 00:43:19.974622 | orchestrator | 2025-05-05 00:43:19.975288 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-05 00:43:19.976198 | orchestrator | Monday 05 May 2025 00:43:19 +0000 (0:00:00.128) 0:01:04.406 ************ 2025-05-05 00:43:20.149973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:20.151275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:20.151441 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:20.152289 | orchestrator | 2025-05-05 00:43:20.154259 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-05 00:43:20.285395 | orchestrator | Monday 05 May 2025 00:43:20 +0000 (0:00:00.175) 0:01:04.582 ************ 2025-05-05 00:43:20.285560 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:20.286596 | orchestrator | 2025-05-05 00:43:20.286635 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-05 00:43:20.287753 | orchestrator | Monday 05 May 2025 00:43:20 +0000 (0:00:00.135) 0:01:04.717 ************ 2025-05-05 00:43:20.447515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:20.447803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:20.448216 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:20.449639 | orchestrator | 2025-05-05 00:43:20.452473 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-05-05 00:43:20.619890 | orchestrator | Monday 05 May 2025 00:43:20 +0000 (0:00:00.161) 0:01:04.879 ************ 2025-05-05 00:43:20.620025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:20.620476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:20.621378 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:20.622623 | orchestrator | 2025-05-05 00:43:20.623217 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-05 00:43:20.623859 | orchestrator | Monday 05 May 2025 00:43:20 +0000 (0:00:00.170) 0:01:05.050 ************ 2025-05-05 00:43:20.786519 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:20.786868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:20.787534 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:20.788773 | orchestrator | 2025-05-05 00:43:20.790233 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-05 00:43:20.790330 | orchestrator | Monday 05 May 2025 00:43:20 +0000 (0:00:00.168) 0:01:05.218 ************ 2025-05-05 00:43:20.921088 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:20.921925 | orchestrator | 2025-05-05 00:43:20.922499 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-05 00:43:20.924375 | orchestrator | Monday 05 May 2025 00:43:20 +0000 
(0:00:00.132) 0:01:05.351 ************ 2025-05-05 00:43:21.246558 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:21.246778 | orchestrator | 2025-05-05 00:43:21.246812 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-05 00:43:21.246976 | orchestrator | Monday 05 May 2025 00:43:21 +0000 (0:00:00.325) 0:01:05.676 ************ 2025-05-05 00:43:21.397441 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:21.398173 | orchestrator | 2025-05-05 00:43:21.398956 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-05 00:43:21.399621 | orchestrator | Monday 05 May 2025 00:43:21 +0000 (0:00:00.152) 0:01:05.829 ************ 2025-05-05 00:43:21.546288 | orchestrator | ok: [testbed-node-5] => { 2025-05-05 00:43:21.547168 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-05 00:43:21.548012 | orchestrator | } 2025-05-05 00:43:21.550667 | orchestrator | 2025-05-05 00:43:21.689177 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-05 00:43:21.689288 | orchestrator | Monday 05 May 2025 00:43:21 +0000 (0:00:00.148) 0:01:05.977 ************ 2025-05-05 00:43:21.689323 | orchestrator | ok: [testbed-node-5] => { 2025-05-05 00:43:21.690330 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-05 00:43:21.690459 | orchestrator | } 2025-05-05 00:43:21.691347 | orchestrator | 2025-05-05 00:43:21.692190 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-05 00:43:21.692579 | orchestrator | Monday 05 May 2025 00:43:21 +0000 (0:00:00.143) 0:01:06.121 ************ 2025-05-05 00:43:21.848093 | orchestrator | ok: [testbed-node-5] => { 2025-05-05 00:43:21.848738 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-05 00:43:21.850354 | orchestrator | } 2025-05-05 00:43:21.850863 | orchestrator | 2025-05-05 00:43:21.852210 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-05 00:43:22.360417 | orchestrator | Monday 05 May 2025 00:43:21 +0000 (0:00:00.153) 0:01:06.275 ************ 2025-05-05 00:43:22.360600 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:22.360726 | orchestrator | 2025-05-05 00:43:22.362113 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-05 00:43:22.363245 | orchestrator | Monday 05 May 2025 00:43:22 +0000 (0:00:00.517) 0:01:06.792 ************ 2025-05-05 00:43:22.889202 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:22.889425 | orchestrator | 2025-05-05 00:43:22.890781 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-05 00:43:22.891523 | orchestrator | Monday 05 May 2025 00:43:22 +0000 (0:00:00.526) 0:01:07.318 ************ 2025-05-05 00:43:23.381603 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:23.382250 | orchestrator | 2025-05-05 00:43:23.382925 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-05 00:43:23.383772 | orchestrator | Monday 05 May 2025 00:43:23 +0000 (0:00:00.494) 0:01:07.813 ************ 2025-05-05 00:43:23.543307 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:23.544437 | orchestrator | 2025-05-05 00:43:23.544479 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-05 00:43:23.545250 | orchestrator | Monday 05 May 2025 00:43:23 +0000 (0:00:00.160) 0:01:07.973 ************ 2025-05-05 00:43:23.664223 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:23.664821 | orchestrator | 2025-05-05 00:43:23.666149 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-05 00:43:23.667285 | orchestrator | Monday 05 May 2025 00:43:23 +0000 (0:00:00.121) 0:01:08.095 ************ 2025-05-05 00:43:23.787126 | 
orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:23.787321 | orchestrator | 2025-05-05 00:43:23.788016 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-05 00:43:23.790365 | orchestrator | Monday 05 May 2025 00:43:23 +0000 (0:00:00.117) 0:01:08.212 ************ 2025-05-05 00:43:24.132763 | orchestrator | ok: [testbed-node-5] => { 2025-05-05 00:43:24.133565 | orchestrator |  "vgs_report": { 2025-05-05 00:43:24.134479 | orchestrator |  "vg": [] 2025-05-05 00:43:24.135523 | orchestrator |  } 2025-05-05 00:43:24.137553 | orchestrator | } 2025-05-05 00:43:24.138137 | orchestrator | 2025-05-05 00:43:24.138192 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-05 00:43:24.138889 | orchestrator | Monday 05 May 2025 00:43:24 +0000 (0:00:00.352) 0:01:08.564 ************ 2025-05-05 00:43:24.280964 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:24.281201 | orchestrator | 2025-05-05 00:43:24.282485 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-05 00:43:24.283923 | orchestrator | Monday 05 May 2025 00:43:24 +0000 (0:00:00.147) 0:01:08.712 ************ 2025-05-05 00:43:24.425538 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:24.426984 | orchestrator | 2025-05-05 00:43:24.427588 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-05 00:43:24.428588 | orchestrator | Monday 05 May 2025 00:43:24 +0000 (0:00:00.145) 0:01:08.857 ************ 2025-05-05 00:43:24.575878 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:24.576560 | orchestrator | 2025-05-05 00:43:24.577773 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-05 00:43:24.579892 | orchestrator | Monday 05 May 2025 00:43:24 +0000 (0:00:00.149) 0:01:09.007 ************ 2025-05-05 00:43:24.716963 | 
orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:24.717831 | orchestrator | 2025-05-05 00:43:24.717883 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-05 00:43:24.718737 | orchestrator | Monday 05 May 2025 00:43:24 +0000 (0:00:00.140) 0:01:09.148 ************ 2025-05-05 00:43:24.858279 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:24.858789 | orchestrator | 2025-05-05 00:43:24.859409 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-05 00:43:24.860335 | orchestrator | Monday 05 May 2025 00:43:24 +0000 (0:00:00.141) 0:01:09.290 ************ 2025-05-05 00:43:25.007881 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:25.009427 | orchestrator | 2025-05-05 00:43:25.010227 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-05 00:43:25.011397 | orchestrator | Monday 05 May 2025 00:43:25 +0000 (0:00:00.148) 0:01:09.438 ************ 2025-05-05 00:43:25.137768 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:25.138171 | orchestrator | 2025-05-05 00:43:25.138994 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-05 00:43:25.140529 | orchestrator | Monday 05 May 2025 00:43:25 +0000 (0:00:00.132) 0:01:09.570 ************ 2025-05-05 00:43:25.266974 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:25.268160 | orchestrator | 2025-05-05 00:43:25.268211 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-05 00:43:25.268836 | orchestrator | Monday 05 May 2025 00:43:25 +0000 (0:00:00.127) 0:01:09.698 ************ 2025-05-05 00:43:25.400657 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:25.402377 | orchestrator | 2025-05-05 00:43:25.402426 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2025-05-05 00:43:25.403624 | orchestrator | Monday 05 May 2025 00:43:25 +0000 (0:00:00.133) 0:01:09.832 ************ 2025-05-05 00:43:25.548905 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:25.549493 | orchestrator | 2025-05-05 00:43:25.550111 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-05 00:43:25.551500 | orchestrator | Monday 05 May 2025 00:43:25 +0000 (0:00:00.148) 0:01:09.980 ************ 2025-05-05 00:43:25.696194 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:25.696446 | orchestrator | 2025-05-05 00:43:25.697745 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-05 00:43:25.700050 | orchestrator | Monday 05 May 2025 00:43:25 +0000 (0:00:00.145) 0:01:10.126 ************ 2025-05-05 00:43:26.027624 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.028121 | orchestrator | 2025-05-05 00:43:26.029090 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-05 00:43:26.029869 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.332) 0:01:10.459 ************ 2025-05-05 00:43:26.171447 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.172298 | orchestrator | 2025-05-05 00:43:26.173602 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-05 00:43:26.174334 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.143) 0:01:10.603 ************ 2025-05-05 00:43:26.312768 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.312941 | orchestrator | 2025-05-05 00:43:26.313849 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-05 00:43:26.314534 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.141) 0:01:10.744 ************ 2025-05-05 00:43:26.487785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:26.488524 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:26.489304 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.490197 | orchestrator | 2025-05-05 00:43:26.490754 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-05 00:43:26.491732 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.174) 0:01:10.919 ************ 2025-05-05 00:43:26.659156 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:26.659441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:26.660540 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.661290 | orchestrator | 2025-05-05 00:43:26.661796 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-05 00:43:26.667371 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.169) 0:01:11.088 ************ 2025-05-05 00:43:26.827394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:26.827642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:26.828905 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.829877 | orchestrator | 2025-05-05 00:43:26.830680 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-05-05 00:43:26.831281 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.169) 0:01:11.258 ************ 2025-05-05 00:43:26.993273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:26.993530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:26.995108 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:26.995985 | orchestrator | 2025-05-05 00:43:26.996724 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-05 00:43:26.997357 | orchestrator | Monday 05 May 2025 00:43:26 +0000 (0:00:00.166) 0:01:11.424 ************ 2025-05-05 00:43:27.165128 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:27.165723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:27.166423 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:27.166873 | orchestrator | 2025-05-05 00:43:27.168015 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-05 00:43:27.336125 | orchestrator | Monday 05 May 2025 00:43:27 +0000 (0:00:00.172) 0:01:11.597 ************ 2025-05-05 00:43:27.336263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:27.337414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:27.338535 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:27.338571 | orchestrator | 2025-05-05 00:43:27.339093 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-05 00:43:27.339562 | orchestrator | Monday 05 May 2025 00:43:27 +0000 (0:00:00.170) 0:01:11.768 ************ 2025-05-05 00:43:27.512509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:27.513476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:27.514523 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:27.515013 | orchestrator | 2025-05-05 00:43:27.515825 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-05 00:43:27.516523 | orchestrator | Monday 05 May 2025 00:43:27 +0000 (0:00:00.176) 0:01:11.944 ************ 2025-05-05 00:43:27.700913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:27.701192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:27.702549 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:27.703121 | orchestrator | 2025-05-05 00:43:27.704136 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-05 00:43:27.704583 | orchestrator | Monday 05 May 2025 00:43:27 +0000 (0:00:00.185) 0:01:12.130 ************ 2025-05-05 00:43:28.428821 | 
orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:28.429315 | orchestrator | 2025-05-05 00:43:28.429425 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-05 00:43:28.431002 | orchestrator | Monday 05 May 2025 00:43:28 +0000 (0:00:00.730) 0:01:12.861 ************ 2025-05-05 00:43:28.946376 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:28.946581 | orchestrator | 2025-05-05 00:43:28.946614 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-05 00:43:28.947555 | orchestrator | Monday 05 May 2025 00:43:28 +0000 (0:00:00.516) 0:01:13.377 ************ 2025-05-05 00:43:29.096528 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:29.097366 | orchestrator | 2025-05-05 00:43:29.100964 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-05 00:43:29.284568 | orchestrator | Monday 05 May 2025 00:43:29 +0000 (0:00:00.150) 0:01:13.528 ************ 2025-05-05 00:43:29.284750 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'vg_name': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'}) 2025-05-05 00:43:29.285949 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'vg_name': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'}) 2025-05-05 00:43:29.286562 | orchestrator | 2025-05-05 00:43:29.286802 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-05 00:43:29.287326 | orchestrator | Monday 05 May 2025 00:43:29 +0000 (0:00:00.187) 0:01:13.716 ************ 2025-05-05 00:43:29.466537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:29.467481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:29.469601 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:29.470582 | orchestrator | 2025-05-05 00:43:29.471152 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-05 00:43:29.471587 | orchestrator | Monday 05 May 2025 00:43:29 +0000 (0:00:00.181) 0:01:13.898 ************ 2025-05-05 00:43:29.637537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:29.637777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:29.638874 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:29.640260 | orchestrator | 2025-05-05 00:43:29.640495 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-05 00:43:29.641850 | orchestrator | Monday 05 May 2025 00:43:29 +0000 (0:00:00.170) 0:01:14.069 ************ 2025-05-05 00:43:29.817817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})  2025-05-05 00:43:29.818264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})  2025-05-05 00:43:29.819606 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:29.819927 | orchestrator | 2025-05-05 00:43:29.820423 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-05 00:43:29.821165 | orchestrator | Monday 05 May 2025 00:43:29 +0000 (0:00:00.178) 0:01:14.248 ************ 2025-05-05 00:43:30.395345 | 
orchestrator | ok: [testbed-node-5] => { 2025-05-05 00:43:30.395893 | orchestrator |  "lvm_report": { 2025-05-05 00:43:30.396525 | orchestrator |  "lv": [ 2025-05-05 00:43:30.397221 | orchestrator |  { 2025-05-05 00:43:30.398279 | orchestrator |  "lv_name": "osd-block-19ded391-41bb-58c4-acef-51f998367f5e", 2025-05-05 00:43:30.398637 | orchestrator |  "vg_name": "ceph-19ded391-41bb-58c4-acef-51f998367f5e" 2025-05-05 00:43:30.399999 | orchestrator |  }, 2025-05-05 00:43:30.400742 | orchestrator |  { 2025-05-05 00:43:30.401157 | orchestrator |  "lv_name": "osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e", 2025-05-05 00:43:30.401911 | orchestrator |  "vg_name": "ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e" 2025-05-05 00:43:30.402531 | orchestrator |  } 2025-05-05 00:43:30.403433 | orchestrator |  ], 2025-05-05 00:43:30.403922 | orchestrator |  "pv": [ 2025-05-05 00:43:30.404511 | orchestrator |  { 2025-05-05 00:43:30.405072 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-05 00:43:30.406573 | orchestrator |  "vg_name": "ceph-19ded391-41bb-58c4-acef-51f998367f5e" 2025-05-05 00:43:30.407247 | orchestrator |  }, 2025-05-05 00:43:30.408020 | orchestrator |  { 2025-05-05 00:43:30.408409 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-05 00:43:30.409067 | orchestrator |  "vg_name": "ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e" 2025-05-05 00:43:30.409607 | orchestrator |  } 2025-05-05 00:43:30.410345 | orchestrator |  ] 2025-05-05 00:43:30.410720 | orchestrator |  } 2025-05-05 00:43:30.411485 | orchestrator | } 2025-05-05 00:43:30.411626 | orchestrator | 2025-05-05 00:43:30.412752 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:43:30.412769 | orchestrator | 2025-05-05 00:43:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-05 00:43:30.413119 | orchestrator | 2025-05-05 00:43:30 | INFO  | Please wait and do not abort execution. 
2025-05-05 00:43:30.413132 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-05 00:43:30.413618 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-05 00:43:30.413954 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-05 00:43:30.414352 | orchestrator | 2025-05-05 00:43:30.414679 | orchestrator | 2025-05-05 00:43:30.415291 | orchestrator | 2025-05-05 00:43:30.415563 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:43:30.415596 | orchestrator | Monday 05 May 2025 00:43:30 +0000 (0:00:00.579) 0:01:14.827 ************ 2025-05-05 00:43:30.415970 | orchestrator | =============================================================================== 2025-05-05 00:43:30.417241 | orchestrator | Create block VGs -------------------------------------------------------- 5.76s 2025-05-05 00:43:30.418305 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s 2025-05-05 00:43:30.418606 | orchestrator | Print LVM report data --------------------------------------------------- 2.14s 2025-05-05 00:43:30.419634 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.94s 2025-05-05 00:43:30.420407 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.73s 2025-05-05 00:43:30.423181 | orchestrator | Add known links to the list of available block devices ------------------ 1.67s 2025-05-05 00:43:30.423367 | orchestrator | Add known partitions to the list of available block devices ------------- 1.58s 2025-05-05 00:43:30.423402 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2025-05-05 00:43:30.423418 | orchestrator | Gather WAL VGs with total and available size in bytes 
------------------- 1.54s 2025-05-05 00:43:30.423442 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.51s 2025-05-05 00:43:30.423489 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s 2025-05-05 00:43:30.423577 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-05-05 00:43:30.423962 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-05-05 00:43:30.424308 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2025-05-05 00:43:30.424553 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.72s 2025-05-05 00:43:30.425068 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.68s 2025-05-05 00:43:30.425450 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.65s 2025-05-05 00:43:30.426316 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.64s 2025-05-05 00:43:30.427192 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-05-05 00:43:30.427637 | orchestrator | Print LVM VGs report data ----------------------------------------------- 0.63s 2025-05-05 00:43:32.256438 | orchestrator | 2025-05-05 00:43:32 | INFO  | Task 627723ca-b559-4669-ada0-06fb8a7ba838 (facts) was prepared for execution. 2025-05-05 00:43:35.361109 | orchestrator | 2025-05-05 00:43:32 | INFO  | It takes a moment until task 627723ca-b559-4669-ada0-06fb8a7ba838 (facts) has been started and output is visible here. 
2025-05-05 00:43:35.361232 | orchestrator | 2025-05-05 00:43:35.361593 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-05 00:43:35.361621 | orchestrator | 2025-05-05 00:43:35.362925 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-05 00:43:35.364128 | orchestrator | Monday 05 May 2025 00:43:35 +0000 (0:00:00.196) 0:00:00.196 ************ 2025-05-05 00:43:36.361234 | orchestrator | ok: [testbed-manager] 2025-05-05 00:43:36.362281 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:43:36.363068 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:43:36.371650 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:43:36.371888 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:43:36.372401 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:43:36.376372 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:36.377474 | orchestrator | 2025-05-05 00:43:36.377972 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-05 00:43:36.378496 | orchestrator | Monday 05 May 2025 00:43:36 +0000 (0:00:01.004) 0:00:01.201 ************ 2025-05-05 00:43:36.520524 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:43:36.599567 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:43:36.675018 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:43:36.752315 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:43:36.827756 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:43:37.547873 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:43:37.548450 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:37.552151 | orchestrator | 2025-05-05 00:43:37.552912 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-05 00:43:37.552946 | orchestrator | 2025-05-05 00:43:37.552970 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-05-05 00:43:37.553897 | orchestrator | Monday 05 May 2025 00:43:37 +0000 (0:00:01.188) 0:00:02.389 ************ 2025-05-05 00:43:42.981642 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:43:42.982682 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:43:42.983452 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:43:42.986464 | orchestrator | ok: [testbed-manager] 2025-05-05 00:43:42.987810 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:43:42.987837 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:43:42.987858 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:43:42.988895 | orchestrator | 2025-05-05 00:43:42.990095 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-05 00:43:42.990757 | orchestrator | 2025-05-05 00:43:42.991584 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-05 00:43:42.992218 | orchestrator | Monday 05 May 2025 00:43:42 +0000 (0:00:05.434) 0:00:07.824 ************ 2025-05-05 00:43:43.300290 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:43:43.373216 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:43:43.445031 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:43:43.534232 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:43:43.606582 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:43:43.645999 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:43:43.648684 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:43:43.649437 | orchestrator | 2025-05-05 00:43:43.650260 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:43:43.650851 | orchestrator | 2025-05-05 00:43:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-05 00:43:43.651279 | orchestrator | 2025-05-05 00:43:43 | INFO  | Please wait and do not abort execution. 2025-05-05 00:43:43.651820 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.652469 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.652981 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.653634 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.654003 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.654373 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.654756 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:43:43.655133 | orchestrator | 2025-05-05 00:43:43.655516 | orchestrator | Monday 05 May 2025 00:43:43 +0000 (0:00:00.665) 0:00:08.489 ************ 2025-05-05 00:43:43.655867 | orchestrator | =============================================================================== 2025-05-05 00:43:43.656383 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.43s 2025-05-05 00:43:43.656741 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s 2025-05-05 00:43:43.657097 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s 2025-05-05 00:43:43.657486 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2025-05-05 00:43:44.154557 | orchestrator | 2025-05-05 00:43:44.157606 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon May 5 00:43:44 UTC 2025 2025-05-05 00:43:45.546479 | 
orchestrator | 2025-05-05 00:43:45.546616 | orchestrator | 2025-05-05 00:43:45 | INFO  | Collection nutshell is prepared for execution 2025-05-05 00:43:45.550840 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [0] - dotfiles 2025-05-05 00:43:45.550893 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [0] - homer 2025-05-05 00:43:45.550953 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [0] - netdata 2025-05-05 00:43:45.550972 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [0] - openstackclient 2025-05-05 00:43:45.550992 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [0] - phpmyadmin 2025-05-05 00:43:45.552197 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [0] - common 2025-05-05 00:43:45.552232 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [1] -- loadbalancer 2025-05-05 00:43:45.552352 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [2] --- opensearch 2025-05-05 00:43:45.552381 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [2] --- mariadb-ng 2025-05-05 00:43:45.552862 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [3] ---- horizon 2025-05-05 00:43:45.552888 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [3] ---- keystone 2025-05-05 00:43:45.552904 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [4] ----- neutron 2025-05-05 00:43:45.552920 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [5] ------ wait-for-nova 2025-05-05 00:43:45.552937 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [5] ------ octavia 2025-05-05 00:43:45.552957 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [4] ----- barbican 2025-05-05 00:43:45.553083 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [4] ----- designate 2025-05-05 00:43:45.553112 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [4] ----- ironic 2025-05-05 00:43:45.553405 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [4] ----- placement 2025-05-05 00:43:45.553431 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [4] ----- magnum 2025-05-05 00:43:45.553452 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [1] 
-- openvswitch 2025-05-05 00:43:45.553617 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [2] --- ovn 2025-05-05 00:43:45.553648 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [1] -- memcached 2025-05-05 00:43:45.553870 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [1] -- redis 2025-05-05 00:43:45.553922 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [1] -- rabbitmq-ng 2025-05-05 00:43:45.553944 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [0] - kubernetes 2025-05-05 00:43:45.554002 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [1] -- kubeconfig 2025-05-05 00:43:45.554070 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [1] -- copy-kubeconfig 2025-05-05 00:43:45.554244 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [0] - ceph 2025-05-05 00:43:45.555351 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [1] -- ceph-pools 2025-05-05 00:43:45.555727 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [2] --- copy-ceph-keys 2025-05-05 00:43:45.555753 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [3] ---- cephclient 2025-05-05 00:43:45.555773 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-05 00:43:45.555945 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [4] ----- wait-for-keystone 2025-05-05 00:43:45.555970 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-05 00:43:45.556009 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [5] ------ glance 2025-05-05 00:43:45.556025 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [5] ------ cinder 2025-05-05 00:43:45.556045 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [5] ------ nova 2025-05-05 00:43:45.685217 | orchestrator | 2025-05-05 00:43:45 | INFO  | A [4] ----- prometheus 2025-05-05 00:43:45.686108 | orchestrator | 2025-05-05 00:43:45 | INFO  | D [5] ------ grafana 2025-05-05 00:43:45.686161 | orchestrator | 2025-05-05 00:43:45 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-05 00:43:47.922253 | 
orchestrator | 2025-05-05 00:43:45 | INFO  | Tasks are running in the background 2025-05-05 00:43:47.922390 | orchestrator | 2025-05-05 00:43:47 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-05 00:43:50.021030 | orchestrator | 2025-05-05 00:43:50 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:43:50.022313 | orchestrator | 2025-05-05 00:43:50 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED 2025-05-05 00:43:50.022639 | orchestrator | 2025-05-05 00:43:50 | INFO  | Task c13e3c35-33ed-4041-b723-8e3724b67956 is in state STARTED 2025-05-05 00:43:50.023071 | orchestrator | 2025-05-05 00:43:50 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED 2025-05-05 00:43:50.023543 | orchestrator | 2025-05-05 00:43:50 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED 2025-05-05 00:43:50.024074 | orchestrator | 2025-05-05 00:43:50 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED 2025-05-05 00:43:50.024147 | orchestrator | 2025-05-05 00:43:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:43:53.060128 | orchestrator | 2025-05-05 00:43:53 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:43:53.063010 | orchestrator | 2025-05-05 00:43:53 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED 2025-05-05 00:43:53.063385 | orchestrator | 2025-05-05 00:43:53 | INFO  | Task c13e3c35-33ed-4041-b723-8e3724b67956 is in state STARTED 2025-05-05 00:43:53.063416 | orchestrator | 2025-05-05 00:43:53 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED 2025-05-05 00:43:53.063438 | orchestrator | 2025-05-05 00:43:53 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED 2025-05-05 00:43:53.064460 | orchestrator | 2025-05-05 00:43:53 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED 2025-05-05 00:43:56.115439 | 
orchestrator | 2025-05-05 00:43:53 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:43:56.115573 | orchestrator | 2025-05-05 00:43:56 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:43:59.150713 | orchestrator | 2025-05-05 00:43:56 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:43:59.150824 | orchestrator | 2025-05-05 00:43:56 | INFO  | Task c13e3c35-33ed-4041-b723-8e3724b67956 is in state STARTED
2025-05-05 00:43:59.150846 | orchestrator | 2025-05-05 00:43:56 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:43:59.150873 | orchestrator | 2025-05-05 00:43:56 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:43:59.150887 | orchestrator | 2025-05-05 00:43:56 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:43:59.150901 | orchestrator | 2025-05-05 00:43:56 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:43:59.150929 | orchestrator | 2025-05-05 00:43:59 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:43:59.151309 | orchestrator | 2025-05-05 00:43:59 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:43:59.151540 | orchestrator | 2025-05-05 00:43:59 | INFO  | Task c13e3c35-33ed-4041-b723-8e3724b67956 is in state STARTED
2025-05-05 00:43:59.151573 | orchestrator | 2025-05-05 00:43:59 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:43:59.152042 | orchestrator | 2025-05-05 00:43:59 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:43:59.152430 | orchestrator | 2025-05-05 00:43:59 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:02.199402 | orchestrator | 2025-05-05 00:43:59 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:02.199542 | orchestrator | 2025-05-05 00:44:02 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:02.201845 | orchestrator | 2025-05-05 00:44:02 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:02.202305 | orchestrator | 2025-05-05 00:44:02 | INFO  | Task c13e3c35-33ed-4041-b723-8e3724b67956 is in state STARTED
2025-05-05 00:44:02.203815 | orchestrator | 2025-05-05 00:44:02 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:02.204356 | orchestrator | 2025-05-05 00:44:02 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:02.204393 | orchestrator | 2025-05-05 00:44:02 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:05.257432 | orchestrator | 2025-05-05 00:44:02 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:05.257551 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:05.259891 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:05.260664 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task c13e3c35-33ed-4041-b723-8e3724b67956 is in state SUCCESS
2025-05-05 00:44:05.260758 | orchestrator |
2025-05-05 00:44:05.260784 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-05 00:44:05.260800 | orchestrator |
2025-05-05 00:44:05.260815 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-05-05 00:44:05.260830 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:00.326) 0:00:00.326 ************
2025-05-05 00:44:05.260844 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:05.260859 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:44:05.260873 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:44:05.260887 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:44:05.260901 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:44:05.260915 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:44:05.260929 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:44:05.260943 | orchestrator |
2025-05-05 00:44:05.260957 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-05-05 00:44:05.260977 | orchestrator | Monday 05 May 2025 00:43:56 +0000 (0:00:03.237) 0:00:03.564 ************
2025-05-05 00:44:05.260992 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-05 00:44:05.261006 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-05 00:44:05.261024 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-05 00:44:05.261038 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-05 00:44:05.261052 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-05 00:44:05.261066 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-05 00:44:05.261080 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-05 00:44:05.261113 | orchestrator |
2025-05-05 00:44:05.261128 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-05-05 00:44:05.261142 | orchestrator | Monday 05 May 2025 00:43:58 +0000 (0:00:02.377) 0:00:05.941 ************
2025-05-05 00:44:05.261201 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:57.213443', 'end': '2025-05-05 00:43:57.219826', 'delta': '0:00:00.006383', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261224 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:57.225327', 'end': '2025-05-05 00:43:57.233684', 'delta': '0:00:00.008357', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261240 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:57.419770', 'end': '2025-05-05 00:43:57.428868', 'delta': '0:00:00.009098', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261281 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:57.714182', 'end': '2025-05-05 00:43:57.722847', 'delta': '0:00:00.008665', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261299 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:57.975115', 'end': '2025-05-05 00:43:57.981778', 'delta': '0:00:00.006663', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261324 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:58.353220', 'end': '2025-05-05 00:43:58.362444', 'delta': '0:00:00.009224', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261345 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-05 00:43:58.584073', 'end': '2025-05-05 00:43:58.590488', 'delta': '0:00:00.006415', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-05 00:44:05.261362 | orchestrator |
2025-05-05 00:44:05.261378 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-05 00:44:05.261395 | orchestrator | Monday 05 May 2025 00:44:00 +0000 (0:00:01.684) 0:00:07.626 ************
2025-05-05 00:44:05.261410 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-05 00:44:05.261424 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-05 00:44:05.261438 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-05 00:44:05.261452 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-05 00:44:05.261466 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-05 00:44:05.261480 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-05 00:44:05.261494 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-05 00:44:05.261507 | orchestrator |
2025-05-05 00:44:05.261521 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:44:05.261535 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261551 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261565 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261585 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261752 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261777 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261799 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:05.261814 | orchestrator |
2025-05-05 00:44:05.261829 | orchestrator | Monday 05 May 2025 00:44:03 +0000 (0:00:03.265) 0:00:10.891 ************
2025-05-05 00:44:05.261845 | orchestrator | ===============================================================================
2025-05-05 00:44:05.261860 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.27s
2025-05-05 00:44:05.261875 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.24s
2025-05-05 00:44:05.261890 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.38s
2025-05-05 00:44:05.261905 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.68s
2025-05-05 00:44:05.261926 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:05.262386 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:05.262419 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:05.264215 | orchestrator | 2025-05-05 00:44:05 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:08.317748 | orchestrator | 2025-05-05 00:44:05 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:08.317869 | orchestrator | 2025-05-05 00:44:08 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:08.320588 | orchestrator | 2025-05-05 00:44:08 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:08.321388 | orchestrator | 2025-05-05 00:44:08 | INFO  | Task
8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:08.323365 | orchestrator | 2025-05-05 00:44:08 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:08.324681 | orchestrator | 2025-05-05 00:44:08 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:08.332503 | orchestrator | 2025-05-05 00:44:08 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:11.401156 | orchestrator | 2025-05-05 00:44:08 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:11.401320 | orchestrator | 2025-05-05 00:44:11 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:11.401415 | orchestrator | 2025-05-05 00:44:11 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:11.402288 | orchestrator | 2025-05-05 00:44:11 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:11.404145 | orchestrator | 2025-05-05 00:44:11 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:11.407812 | orchestrator | 2025-05-05 00:44:11 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:11.410679 | orchestrator | 2025-05-05 00:44:11 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:14.475183 | orchestrator | 2025-05-05 00:44:11 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:14.475297 | orchestrator | 2025-05-05 00:44:14 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:14.475473 | orchestrator | 2025-05-05 00:44:14 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:14.477836 | orchestrator | 2025-05-05 00:44:14 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:14.478424 | orchestrator | 2025-05-05 00:44:14 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:14.478469 | orchestrator | 2025-05-05 00:44:14 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:17.526341 | orchestrator | 2025-05-05 00:44:14 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:17.526444 | orchestrator | 2025-05-05 00:44:14 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:17.526479 | orchestrator | 2025-05-05 00:44:17 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:20.560214 | orchestrator | 2025-05-05 00:44:17 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:20.560325 | orchestrator | 2025-05-05 00:44:17 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:20.560345 | orchestrator | 2025-05-05 00:44:17 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:20.560362 | orchestrator | 2025-05-05 00:44:17 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:20.560377 | orchestrator | 2025-05-05 00:44:17 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:20.560393 | orchestrator | 2025-05-05 00:44:17 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:20.560423 | orchestrator | 2025-05-05 00:44:20 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:20.561469 | orchestrator | 2025-05-05 00:44:20 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:20.563378 | orchestrator | 2025-05-05 00:44:20 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:20.566813 | orchestrator | 2025-05-05 00:44:20 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:23.615251 | orchestrator | 2025-05-05 00:44:20 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:23.615358 | orchestrator | 2025-05-05 00:44:20 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:23.615378 | orchestrator | 2025-05-05 00:44:20 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:23.615427 | orchestrator | 2025-05-05 00:44:23 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:23.616730 | orchestrator | 2025-05-05 00:44:23 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:23.616793 | orchestrator | 2025-05-05 00:44:23 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state STARTED
2025-05-05 00:44:23.618005 | orchestrator | 2025-05-05 00:44:23 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:23.619261 | orchestrator | 2025-05-05 00:44:23 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:23.621273 | orchestrator | 2025-05-05 00:44:23 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:26.675129 | orchestrator | 2025-05-05 00:44:23 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:26.675280 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:26.679870 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:26.681321 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:26.681896 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task 8c6078c2-8d07-474b-a204-8d650f3290af is in state SUCCESS
2025-05-05 00:44:26.687378 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:26.693448 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:29.761141 | orchestrator | 2025-05-05 00:44:26 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:29.761290 | orchestrator | 2025-05-05 00:44:26 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:29.761331 | orchestrator | 2025-05-05 00:44:29 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:29.766092 | orchestrator | 2025-05-05 00:44:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:29.766132 | orchestrator | 2025-05-05 00:44:29 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:29.769226 | orchestrator | 2025-05-05 00:44:29 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:29.769351 | orchestrator | 2025-05-05 00:44:29 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:29.772842 | orchestrator | 2025-05-05 00:44:29 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:29.773179 | orchestrator | 2025-05-05 00:44:29 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:32.811953 | orchestrator | 2025-05-05 00:44:32 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:32.813595 | orchestrator | 2025-05-05 00:44:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:32.813923 | orchestrator | 2025-05-05 00:44:32 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:32.815128 | orchestrator | 2025-05-05 00:44:32 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:32.816855 | orchestrator | 2025-05-05 00:44:32 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:32.818070 | orchestrator | 2025-05-05 00:44:32 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:32.818205 | orchestrator | 2025-05-05 00:44:32 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:35.883210 | orchestrator | 2025-05-05 00:44:35 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:35.883653 | orchestrator | 2025-05-05 00:44:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:35.884425 | orchestrator | 2025-05-05 00:44:35 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:35.885355 | orchestrator | 2025-05-05 00:44:35 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:35.888133 | orchestrator | 2025-05-05 00:44:35 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:35.888638 | orchestrator | 2025-05-05 00:44:35 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:38.960670 | orchestrator | 2025-05-05 00:44:35 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:38.960850 | orchestrator | 2025-05-05 00:44:38 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:38.962419 | orchestrator | 2025-05-05 00:44:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:38.962482 | orchestrator | 2025-05-05 00:44:38 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:38.965362 | orchestrator | 2025-05-05 00:44:38 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:38.969775 | orchestrator | 2025-05-05 00:44:38 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:38.970476 | orchestrator | 2025-05-05 00:44:38 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:42.015930 | orchestrator | 2025-05-05 00:44:38 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:42.016059 | orchestrator | 2025-05-05 00:44:42 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:42.016139 | orchestrator | 2025-05-05 00:44:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:42.016483 | orchestrator | 2025-05-05 00:44:42 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state STARTED
2025-05-05 00:44:42.019185 | orchestrator | 2025-05-05 00:44:42 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:42.019440 | orchestrator | 2025-05-05 00:44:42 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:42.019900 | orchestrator | 2025-05-05 00:44:42 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:45.069111 | orchestrator | 2025-05-05 00:44:42 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:45.069289 | orchestrator | 2025-05-05 00:44:45 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:45.069382 | orchestrator | 2025-05-05 00:44:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:45.069650 | orchestrator | 2025-05-05 00:44:45 | INFO  | Task dea0f289-3c21-4921-bec8-5889367a64e1 is in state SUCCESS
2025-05-05 00:44:45.073451 | orchestrator | 2025-05-05 00:44:45 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:45.074927 | orchestrator | 2025-05-05 00:44:45 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:45.075471 | orchestrator | 2025-05-05 00:44:45 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:48.117009 | orchestrator | 2025-05-05 00:44:45 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:48.117143 | orchestrator | 2025-05-05 00:44:48 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:48.117569 | orchestrator | 2025-05-05 00:44:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:48.117697 | orchestrator | 2025-05-05 00:44:48 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:48.117960 | orchestrator | 2025-05-05 00:44:48 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:48.119639 | orchestrator | 2025-05-05 00:44:48 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:48.119717 | orchestrator | 2025-05-05 00:44:48 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:51.164714 | orchestrator | 2025-05-05 00:44:51 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:51.165940 | orchestrator | 2025-05-05 00:44:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:51.166681 | orchestrator | 2025-05-05 00:44:51 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:51.167981 | orchestrator | 2025-05-05 00:44:51 | INFO  | Task 7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state STARTED
2025-05-05 00:44:51.169118 | orchestrator | 2025-05-05 00:44:51 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:54.214709 | orchestrator | 2025-05-05 00:44:51 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:54.214913 | orchestrator | 2025-05-05 00:44:54 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:54.215004 | orchestrator | 2025-05-05 00:44:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:54.215879 | orchestrator | 2025-05-05 00:44:54 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:54.216073 | orchestrator | 2025-05-05 00:44:54 | INFO  | Task
7b75540a-76b6-4d5d-b278-9ffde8ee3f7e is in state SUCCESS
2025-05-05 00:44:54.216426 | orchestrator |
2025-05-05 00:44:54.216485 | orchestrator |
2025-05-05 00:44:54.216503 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-05 00:44:54.216517 | orchestrator |
2025-05-05 00:44:54.216532 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-05 00:44:54.216547 | orchestrator | Monday 05 May 2025 00:43:52 +0000 (0:00:00.416) 0:00:00.416 ************
2025-05-05 00:44:54.216561 | orchestrator | ok: [testbed-manager] => {
2025-05-05 00:44:54.216577 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-05 00:44:54.216593 | orchestrator | }
2025-05-05 00:44:54.216608 | orchestrator |
2025-05-05 00:44:54.216622 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-05 00:44:54.216636 | orchestrator | Monday 05 May 2025 00:43:52 +0000 (0:00:00.221) 0:00:00.638 ************
2025-05-05 00:44:54.216650 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.216713 | orchestrator |
2025-05-05 00:44:54.216730 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-05 00:44:54.216784 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:00.963) 0:00:01.602 ************
2025-05-05 00:44:54.216802 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-05 00:44:54.216816 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-05 00:44:54.216830 | orchestrator |
2025-05-05 00:44:54.216845 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-05 00:44:54.216859 | orchestrator | Monday 05 May 2025 00:43:54 +0000 (0:00:00.838) 0:00:02.441 ************
2025-05-05 00:44:54.216873 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.216932 | orchestrator |
2025-05-05 00:44:54.216948 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-05 00:44:54.216962 | orchestrator | Monday 05 May 2025 00:43:56 +0000 (0:00:01.720) 0:00:04.161 ************
2025-05-05 00:44:54.216977 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.216991 | orchestrator |
2025-05-05 00:44:54.217005 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-05 00:44:54.217019 | orchestrator | Monday 05 May 2025 00:43:57 +0000 (0:00:01.225) 0:00:05.386 ************
2025-05-05 00:44:54.217033 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-05 00:44:54.217047 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.217064 | orchestrator |
2025-05-05 00:44:54.217080 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-05 00:44:54.217096 | orchestrator | Monday 05 May 2025 00:44:22 +0000 (0:00:25.329) 0:00:30.716 ************
2025-05-05 00:44:54.217111 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.217127 | orchestrator |
2025-05-05 00:44:54.217142 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:44:54.217179 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.217196 | orchestrator |
2025-05-05 00:44:54.217212 | orchestrator | Monday 05 May 2025 00:44:24 +0000 (0:00:01.969) 0:00:32.685 ************
2025-05-05 00:44:54.217228 | orchestrator | ===============================================================================
2025-05-05 00:44:54.217243 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.33s
2025-05-05 00:44:54.217259 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.97s
2025-05-05 00:44:54.217274 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.72s
2025-05-05 00:44:54.217294 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.23s
2025-05-05 00:44:54.217310 | orchestrator | osism.services.homer : Create traefik external network ------------------ 0.96s
2025-05-05 00:44:54.217326 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.84s
2025-05-05 00:44:54.217342 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.22s
2025-05-05 00:44:54.217357 | orchestrator |
2025-05-05 00:44:54.217372 | orchestrator |
2025-05-05 00:44:54.217388 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-05 00:44:54.217403 | orchestrator |
2025-05-05 00:44:54.217418 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-05 00:44:54.217432 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:00.403) 0:00:00.403 ************
2025-05-05 00:44:54.217447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-05 00:44:54.217462 | orchestrator |
2025-05-05 00:44:54.217476 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-05 00:44:54.217490 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:00.462) 0:00:00.866 ************
2025-05-05 00:44:54.217504 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-05 00:44:54.217518 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-05 00:44:54.217533 | orchestrator | ok: [testbed-manager] =>
(item=/opt/openstackclient)
2025-05-05 00:44:54.217547 | orchestrator |
2025-05-05 00:44:54.217561 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-05 00:44:54.217575 | orchestrator | Monday 05 May 2025 00:43:55 +0000 (0:00:01.371) 0:00:02.237 ************
2025-05-05 00:44:54.217590 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.217604 | orchestrator |
2025-05-05 00:44:54.217618 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-05 00:44:54.217632 | orchestrator | Monday 05 May 2025 00:43:56 +0000 (0:00:01.267) 0:00:03.505 ************
2025-05-05 00:44:54.217647 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-05 00:44:54.217661 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.217675 | orchestrator |
2025-05-05 00:44:54.217699 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-05 00:44:54.217909 | orchestrator | Monday 05 May 2025 00:44:36 +0000 (0:00:39.597) 0:00:43.103 ************
2025-05-05 00:44:54.217934 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.217948 | orchestrator |
2025-05-05 00:44:54.217963 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-05 00:44:54.217978 | orchestrator | Monday 05 May 2025 00:44:37 +0000 (0:00:01.829) 0:00:44.932 ************
2025-05-05 00:44:54.217992 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.218006 | orchestrator |
2025-05-05 00:44:54.218067 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-05 00:44:54.218086 | orchestrator | Monday 05 May 2025 00:44:38 +0000 (0:00:00.713) 0:00:45.646 ************
2025-05-05 00:44:54.218100 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.218125 | orchestrator |
2025-05-05 00:44:54.218140 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-05 00:44:54.218154 | orchestrator | Monday 05 May 2025 00:44:41 +0000 (0:00:02.640) 0:00:48.286 ************
2025-05-05 00:44:54.218169 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.218183 | orchestrator |
2025-05-05 00:44:54.218197 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-05 00:44:54.218211 | orchestrator | Monday 05 May 2025 00:44:42 +0000 (0:00:01.012) 0:00:49.298 ************
2025-05-05 00:44:54.218248 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.218263 | orchestrator |
2025-05-05 00:44:54.218278 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-05 00:44:54.218292 | orchestrator | Monday 05 May 2025 00:44:43 +0000 (0:00:00.841) 0:00:50.140 ************
2025-05-05 00:44:54.218306 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.218320 | orchestrator |
2025-05-05 00:44:54.218334 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:44:54.218361 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.218375 | orchestrator |
2025-05-05 00:44:54.218389 | orchestrator | Monday 05 May 2025 00:44:43 +0000 (0:00:00.720) 0:00:50.861 ************
2025-05-05 00:44:54.218403 | orchestrator | ===============================================================================
2025-05-05 00:44:54.218417 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 39.60s
2025-05-05 00:44:54.218431 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.64s
2025-05-05 00:44:54.218446 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.83s
2025-05-05 00:44:54.218466 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.37s
2025-05-05 00:44:54.218480 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.27s
2025-05-05 00:44:54.218494 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.01s
2025-05-05 00:44:54.218508 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.84s
2025-05-05 00:44:54.218523 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.72s
2025-05-05 00:44:54.218540 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.71s
2025-05-05 00:44:54.218557 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.46s
2025-05-05 00:44:54.218572 | orchestrator |
2025-05-05 00:44:54.218595 | orchestrator |
2025-05-05 00:44:54.218609 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:44:54.218624 | orchestrator |
2025-05-05 00:44:54.218638 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:44:54.218652 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:00.343) 0:00:00.343 ************
2025-05-05 00:44:54.218666 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-05 00:44:54.218680 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-05 00:44:54.218694 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-05 00:44:54.218708 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-05 00:44:54.218723 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-05 00:44:54.218737 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-05 00:44:54.218777 | orchestrator
| changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-05 00:44:54.218792 | orchestrator |
2025-05-05 00:44:54.218806 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-05 00:44:54.218820 | orchestrator |
2025-05-05 00:44:54.218834 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-05 00:44:54.218848 | orchestrator | Monday 05 May 2025 00:43:55 +0000 (0:00:01.336) 0:00:01.680 ************
2025-05-05 00:44:54.218882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:44:54.218899 | orchestrator |
2025-05-05 00:44:54.218913 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-05 00:44:54.218928 | orchestrator | Monday 05 May 2025 00:43:57 +0000 (0:00:02.106) 0:00:03.786 ************
2025-05-05 00:44:54.218942 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:44:54.218956 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:44:54.218970 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:44:54.218984 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.218998 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:44:54.219012 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:44:54.219026 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:44:54.219040 | orchestrator |
2025-05-05 00:44:54.219054 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-05 00:44:54.219069 | orchestrator | Monday 05 May 2025 00:43:59 +0000 (0:00:02.906) 0:00:06.101 ************
2025-05-05 00:44:54.219083 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.219097 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:44:54.219111 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:44:54.219125 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:44:54.219139 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:44:54.219152 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:44:54.219166 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:44:54.219180 | orchestrator |
2025-05-05 00:44:54.219194 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-05 00:44:54.219208 | orchestrator | Monday 05 May 2025 00:44:02 +0000 (0:00:02.906) 0:00:09.007 ************
2025-05-05 00:44:54.219223 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.219237 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:44:54.219251 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:44:54.219270 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:44:54.219284 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:44:54.219298 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:44:54.219312 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:44:54.219326 | orchestrator |
2025-05-05 00:44:54.219340 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-05 00:44:54.219354 | orchestrator | Monday 05 May 2025 00:44:04 +0000 (0:00:01.992) 0:00:11.000 ************
2025-05-05 00:44:54.219368 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.219382 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:44:54.219396 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:44:54.219410 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:44:54.219424 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:44:54.219438 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:44:54.219452 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:44:54.219466 | orchestrator |
2025-05-05 00:44:54.219481 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-05 00:44:54.219495 | orchestrator | Monday 05 May 2025 00:44:13 +0000 (0:00:09.455) 0:00:20.455 ************
2025-05-05 00:44:54.219509 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:44:54.219523 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:44:54.219537 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:44:54.219551 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:44:54.219565 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:44:54.219579 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:44:54.219593 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.219607 | orchestrator |
2025-05-05 00:44:54.219622 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-05 00:44:54.219636 | orchestrator | Monday 05 May 2025 00:44:30 +0000 (0:00:16.952) 0:00:37.408 ************
2025-05-05 00:44:54.219650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:44:54.219680 | orchestrator |
2025-05-05 00:44:54.219695 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-05 00:44:54.219709 | orchestrator | Monday 05 May 2025 00:44:32 +0000 (0:00:01.698) 0:00:39.106 ************
2025-05-05 00:44:54.219723 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-05 00:44:54.219737 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-05 00:44:54.219816 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-05 00:44:54.219835 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-05 00:44:54.219858 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-05 00:44:54.219873 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-05 00:44:54.219888 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-05 00:44:54.219902 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-05 00:44:54.219917 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-05 00:44:54.219931 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-05 00:44:54.219944 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-05 00:44:54.219958 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-05 00:44:54.219972 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-05 00:44:54.219986 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-05 00:44:54.220000 | orchestrator |
2025-05-05 00:44:54.220014 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-05 00:44:54.220028 | orchestrator | Monday 05 May 2025 00:44:38 +0000 (0:00:06.240) 0:00:45.347 ************
2025-05-05 00:44:54.220042 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.220056 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:44:54.220071 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:44:54.220085 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:44:54.220099 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:44:54.220112 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:44:54.220126 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:44:54.220140 | orchestrator |
2025-05-05 00:44:54.220154 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-05 00:44:54.220169 | orchestrator | Monday 05 May 2025 00:44:39 +0000 (0:00:01.230) 0:00:46.577 ************
2025-05-05 00:44:54.220183 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.220197 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:44:54.220211 |
orchestrator | changed: [testbed-node-0]
2025-05-05 00:44:54.220225 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:44:54.220239 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:44:54.220252 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:44:54.220266 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:44:54.220280 | orchestrator |
2025-05-05 00:44:54.220294 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-05 00:44:54.220314 | orchestrator | Monday 05 May 2025 00:44:41 +0000 (0:00:02.007) 0:00:48.584 ************
2025-05-05 00:44:54.220328 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:44:54.220341 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:44:54.220353 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:44:54.220365 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.220378 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:44:54.220390 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:44:54.220402 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:44:54.220415 | orchestrator |
2025-05-05 00:44:54.220427 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-05 00:44:54.220440 | orchestrator | Monday 05 May 2025 00:44:44 +0000 (0:00:02.433) 0:00:51.018 ************
2025-05-05 00:44:54.220453 | orchestrator | ok: [testbed-manager]
2025-05-05 00:44:54.220465 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:44:54.220485 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:44:54.220497 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:44:54.220510 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:44:54.220522 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:44:54.220535 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:44:54.220547 | orchestrator |
2025-05-05 00:44:54.220560 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-05 00:44:54.220572 | orchestrator | Monday 05 May 2025 00:44:46 +0000 (0:00:02.119) 0:00:53.138 ************
2025-05-05 00:44:54.220585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-05 00:44:54.220599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:44:54.220612 | orchestrator |
2025-05-05 00:44:54.220624 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-05 00:44:54.220637 | orchestrator | Monday 05 May 2025 00:44:48 +0000 (0:00:01.750) 0:00:54.889 ************
2025-05-05 00:44:54.220649 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.220662 | orchestrator |
2025-05-05 00:44:54.220674 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-05 00:44:54.220687 | orchestrator | Monday 05 May 2025 00:44:50 +0000 (0:00:01.734) 0:00:56.623 ************
2025-05-05 00:44:54.220699 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:44:54.220712 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:44:54.220725 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:44:54.220737 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:44:54.220796 | orchestrator | changed: [testbed-manager]
2025-05-05 00:44:54.220819 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:44:54.220833 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:44:54.220845 | orchestrator |
2025-05-05 00:44:54.220858 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:44:54.220871 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.220884 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.220897 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.220914 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.220933 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.221007 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.221023 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:44:54.221035 | orchestrator |
2025-05-05 00:44:54.221048 | orchestrator | Monday 05 May 2025 00:44:53 +0000 (0:00:03.501) 0:01:00.124 ************
2025-05-05 00:44:54.221061 | orchestrator | ===============================================================================
2025-05-05 00:44:54.221074 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.95s
2025-05-05 00:44:54.221086 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.46s
2025-05-05 00:44:54.221099 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.24s
2025-05-05 00:44:54.221111 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.50s
2025-05-05 00:44:54.221133 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.91s
2025-05-05 00:44:54.221146 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.43s
2025-05-05 00:44:54.221158 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.31s
2025-05-05 00:44:54.221171 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.12s
2025-05-05 00:44:54.221183 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.11s
2025-05-05 00:44:54.221195 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.01s
2025-05-05 00:44:54.221208 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.99s
2025-05-05 00:44:54.221220 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.75s
2025-05-05 00:44:54.221232 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.73s
2025-05-05 00:44:54.221245 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.70s
2025-05-05 00:44:54.221258 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s
2025-05-05 00:44:54.221271 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.23s
2025-05-05 00:44:54.221286 | orchestrator | 2025-05-05 00:44:54 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:44:57.249126 | orchestrator | 2025-05-05 00:44:54 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:44:57.249252 | orchestrator | 2025-05-05 00:44:57 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:44:57.249339 | orchestrator | 2025-05-05 00:44:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:44:57.249363 | orchestrator | 2025-05-05 00:44:57 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:44:57.249714 | orchestrator | 2025-05-05 00:44:57 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:45:00.285106 | orchestrator | 2025-05-05 00:44:57 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:00.285225 | orchestrator |
2025-05-05 00:45:00 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:00.285304 | orchestrator | 2025-05-05 00:45:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:00.285329 | orchestrator | 2025-05-05 00:45:00 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:00.285633 | orchestrator | 2025-05-05 00:45:00 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:45:00.285709 | orchestrator | 2025-05-05 00:45:00 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:03.342906 | orchestrator | 2025-05-05 00:45:03 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:06.392671 | orchestrator | 2025-05-05 00:45:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:06.392880 | orchestrator | 2025-05-05 00:45:03 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:06.392906 | orchestrator | 2025-05-05 00:45:03 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:45:06.392922 | orchestrator | 2025-05-05 00:45:03 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:06.393022 | orchestrator | 2025-05-05 00:45:06 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:06.393116 | orchestrator | 2025-05-05 00:45:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:06.393979 | orchestrator | 2025-05-05 00:45:06 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:06.395033 | orchestrator | 2025-05-05 00:45:06 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state STARTED
2025-05-05 00:45:09.450261 | orchestrator | 2025-05-05 00:45:06 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:09.450403 | orchestrator | 2025-05-05 00:45:09 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:09.452333 | orchestrator | 2025-05-05 00:45:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:09.453370 | orchestrator | 2025-05-05 00:45:09 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:09.453974 | orchestrator | 2025-05-05 00:45:09 | INFO  | Task 42fa8161-5ae8-46a1-a903-a00656f394dd is in state SUCCESS
2025-05-05 00:45:09.454135 | orchestrator | 2025-05-05 00:45:09 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:12.520041 | orchestrator | 2025-05-05 00:45:12 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:12.525704 | orchestrator | 2025-05-05 00:45:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:12.534598 | orchestrator | 2025-05-05 00:45:12 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:15.569023 | orchestrator | 2025-05-05 00:45:12 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:15.569154 | orchestrator | 2025-05-05 00:45:15 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:15.570349 | orchestrator | 2025-05-05 00:45:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:15.572028 | orchestrator | 2025-05-05 00:45:15 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:18.606975 | orchestrator | 2025-05-05 00:45:15 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:18.607096 | orchestrator | 2025-05-05 00:45:18 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:18.607684 | orchestrator | 2025-05-05 00:45:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:21.653078 | orchestrator | 2025-05-05 00:45:18 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:21.653210 | orchestrator | 2025-05-05 00:45:18 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:21.653249 | orchestrator | 2025-05-05 00:45:21 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:21.654833 | orchestrator | 2025-05-05 00:45:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:21.655619 | orchestrator | 2025-05-05 00:45:21 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:21.655737 | orchestrator | 2025-05-05 00:45:21 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:24.701587 | orchestrator | 2025-05-05 00:45:24 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:24.703015 | orchestrator | 2025-05-05 00:45:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:24.706530 | orchestrator | 2025-05-05 00:45:24 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:27.747129 | orchestrator | 2025-05-05 00:45:24 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:27.747283 | orchestrator | 2025-05-05 00:45:27 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:27.747791 | orchestrator | 2025-05-05 00:45:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:27.747963 | orchestrator | 2025-05-05 00:45:27 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:30.789556 | orchestrator | 2025-05-05 00:45:27 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:30.789728 | orchestrator | 2025-05-05 00:45:30 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:30.792995 | orchestrator | 2025-05-05 00:45:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:30.794690 | orchestrator | 2025-05-05 00:45:30 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:33.843178 | orchestrator | 2025-05-05 00:45:30 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:33.843322 | orchestrator | 2025-05-05 00:45:33 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:33.848570 | orchestrator | 2025-05-05 00:45:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:33.850132 | orchestrator | 2025-05-05 00:45:33 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:33.851159 | orchestrator | 2025-05-05 00:45:33 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:36.889715 | orchestrator | 2025-05-05 00:45:36 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:36.890983 | orchestrator | 2025-05-05 00:45:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:36.893788 | orchestrator | 2025-05-05 00:45:36 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:39.934005 | orchestrator | 2025-05-05 00:45:36 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:39.934240 | orchestrator | 2025-05-05 00:45:39 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:39.936076 | orchestrator | 2025-05-05 00:45:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:39.938970 | orchestrator | 2025-05-05 00:45:39 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:42.984553 | orchestrator | 2025-05-05 00:45:39 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:42.984687 | orchestrator | 2025-05-05 00:45:42 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:42.986092 | orchestrator |
2025-05-05 00:45:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:42.987207 | orchestrator | 2025-05-05 00:45:42 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:46.037281 | orchestrator | 2025-05-05 00:45:42 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:46.037437 | orchestrator | 2025-05-05 00:45:46 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:46.040984 | orchestrator | 2025-05-05 00:45:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:46.041661 | orchestrator | 2025-05-05 00:45:46 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:46.041902 | orchestrator | 2025-05-05 00:45:46 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:49.096913 | orchestrator | 2025-05-05 00:45:49 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:49.100259 | orchestrator | 2025-05-05 00:45:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:49.101623 | orchestrator | 2025-05-05 00:45:49 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:52.154296 | orchestrator | 2025-05-05 00:45:49 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:52.154441 | orchestrator | 2025-05-05 00:45:52 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:52.155144 | orchestrator | 2025-05-05 00:45:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:52.157554 | orchestrator | 2025-05-05 00:45:52 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:52.157768 | orchestrator | 2025-05-05 00:45:52 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:55.203725 | orchestrator | 2025-05-05 00:45:55 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:55.204473 | orchestrator | 2025-05-05 00:45:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:55.205446 | orchestrator | 2025-05-05 00:45:55 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:55.205556 | orchestrator | 2025-05-05 00:45:55 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:45:58.250627 | orchestrator | 2025-05-05 00:45:58 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:45:58.251306 | orchestrator | 2025-05-05 00:45:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:45:58.252473 | orchestrator | 2025-05-05 00:45:58 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state STARTED
2025-05-05 00:45:58.252702 | orchestrator | 2025-05-05 00:45:58 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:01.292838 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:01.294836 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:01.297349 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:01.297398 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:01.297424 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task 800ad5aa-0f87-41f9-b36c-347d78b42a7b is in state SUCCESS
2025-05-05 00:46:01.299798 | orchestrator |
2025-05-05 00:46:01.299861 | orchestrator |
2025-05-05 00:46:01.299877 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-05 00:46:01.299922 | orchestrator |
2025-05-05 00:46:01.299939 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external
network] ************* 2025-05-05 00:46:01.299954 | orchestrator | Monday 05 May 2025 00:44:08 +0000 (0:00:00.176) 0:00:00.176 ************ 2025-05-05 00:46:01.300037 | orchestrator | ok: [testbed-manager] 2025-05-05 00:46:01.300056 | orchestrator | 2025-05-05 00:46:01.300070 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-05 00:46:01.300085 | orchestrator | Monday 05 May 2025 00:44:09 +0000 (0:00:00.875) 0:00:01.052 ************ 2025-05-05 00:46:01.300100 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-05 00:46:01.300121 | orchestrator | 2025-05-05 00:46:01.300135 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-05 00:46:01.300149 | orchestrator | Monday 05 May 2025 00:44:10 +0000 (0:00:00.807) 0:00:01.860 ************ 2025-05-05 00:46:01.300163 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.300178 | orchestrator | 2025-05-05 00:46:01.300192 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-05 00:46:01.300232 | orchestrator | Monday 05 May 2025 00:44:11 +0000 (0:00:01.647) 0:00:03.508 ************ 2025-05-05 00:46:01.300246 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
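The "FAILED - RETRYING: ... (10 retries left)" line above is Ansible's standard `retries`/`until` loop output: the task is re-attempted until its condition holds or the retry budget is exhausted. A minimal sketch of that pattern, assuming the task drives the compose project created earlier (the module call and delay are illustrative, not the actual osism.services.phpmyadmin role code):

```yaml
# Hypothetical sketch of the retry pattern visible in the log above;
# the real role's task body may differ.
- name: Manage phpmyadmin service
  ansible.builtin.command:
    cmd: docker compose --project-directory /opt/phpmyadmin up -d
  register: result
  until: result.rc == 0   # keep retrying until the command succeeds
  retries: 10             # matches "10 retries left" on the first failure
  delay: 5                # seconds between attempts (assumed value)
```

The ~51s spent on this task in the PLAY RECAP timing summary is consistent with several such retry cycles while the containers came up.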
2025-05-05 00:46:01.300261 | orchestrator | ok: [testbed-manager] 2025-05-05 00:46:01.300275 | orchestrator | 2025-05-05 00:46:01.300289 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-05 00:46:01.300303 | orchestrator | Monday 05 May 2025 00:45:03 +0000 (0:00:51.595) 0:00:55.103 ************ 2025-05-05 00:46:01.300317 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.300331 | orchestrator | 2025-05-05 00:46:01.300345 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:46:01.300359 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:46:01.300375 | orchestrator | 2025-05-05 00:46:01.300389 | orchestrator | Monday 05 May 2025 00:45:06 +0000 (0:00:03.657) 0:00:58.760 ************ 2025-05-05 00:46:01.300404 | orchestrator | =============================================================================== 2025-05-05 00:46:01.300417 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.60s 2025-05-05 00:46:01.300431 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.66s 2025-05-05 00:46:01.300445 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.65s 2025-05-05 00:46:01.300459 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.88s 2025-05-05 00:46:01.300473 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.81s 2025-05-05 00:46:01.300487 | orchestrator | 2025-05-05 00:46:01.300501 | orchestrator | 2025-05-05 00:46:01.300515 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-05 00:46:01.300530 | orchestrator | 2025-05-05 00:46:01.300544 | orchestrator | TASK [common : include_tasks] 
************************************************** 2025-05-05 00:46:01.300558 | orchestrator | Monday 05 May 2025 00:43:49 +0000 (0:00:00.288) 0:00:00.288 ************ 2025-05-05 00:46:01.300573 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:46:01.300588 | orchestrator | 2025-05-05 00:46:01.300602 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-05 00:46:01.300616 | orchestrator | Monday 05 May 2025 00:43:50 +0000 (0:00:01.164) 0:00:01.453 ************ 2025-05-05 00:46:01.300630 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300644 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300657 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300672 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300685 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300699 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.300715 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300729 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300743 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300757 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300771 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300785 
| orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-05 00:46:01.300809 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.300832 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.300846 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300871 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.300891 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300915 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-05 00:46:01.300931 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.300951 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.301008 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-05 00:46:01.301031 | orchestrator | 2025-05-05 00:46:01.301054 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-05 00:46:01.301077 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:03.593) 0:00:05.047 ************ 2025-05-05 00:46:01.301098 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:46:01.301130 | orchestrator | 2025-05-05 00:46:01.301154 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-05 00:46:01.301181 | orchestrator | Monday 05 May 2025 00:43:55 +0000 
(0:00:01.532) 0:00:06.579 ************ 2025-05-05 00:46:01.301209 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301373 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.301404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-05 00:46:01.301434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301482 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.301624 | orchestrator 
| 2025-05-05 00:46:01.301639 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-05 00:46:01.301653 | orchestrator | Monday 05 May 2025 00:44:00 +0000 (0:00:04.553) 0:00:11.133 ************ 2025-05-05 00:46:01.301675 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.301690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301711 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301727 | orchestrator | 
skipping: [testbed-manager] 2025-05-05 00:46:01.301741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.301756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.301816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301846 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:46:01.301860 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:46:01.301874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.301889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301928 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:46:01.301942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.301957 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.301997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.302012 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:46:01.302142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.302159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.302174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.302188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.302211 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:46:01.302227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.302242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.302256 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:46:01.302270 | orchestrator | 2025-05-05 00:46:01.302284 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-05 00:46:01.302298 | orchestrator | Monday 05 May 2025 00:44:01 +0000 (0:00:01.700) 0:00:12.833 ************ 2025-05-05 00:46:01.302313 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.302335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303510 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303633 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:46:01.303660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.303707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303749 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:46:01.303765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.303782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.303853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303892 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:46:01.303907 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:46:01.303922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.303942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.303957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.304009 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:46:01.304037 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.304076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.304095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.304111 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:46:01.304138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-05 00:46:01.304156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.304172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.304190 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:46:01.304206 | orchestrator | 2025-05-05 00:46:01.304223 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-05 00:46:01.304239 | orchestrator | Monday 05 May 2025 00:44:04 +0000 (0:00:02.668) 0:00:15.501 ************ 2025-05-05 00:46:01.304256 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:46:01.304273 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:46:01.304288 | orchestrator 
| skipping: [testbed-node-1] 2025-05-05 00:46:01.304304 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:46:01.304320 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:46:01.304336 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:46:01.304351 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:46:01.304368 | orchestrator | 2025-05-05 00:46:01.304385 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-05 00:46:01.304400 | orchestrator | Monday 05 May 2025 00:44:05 +0000 (0:00:01.076) 0:00:16.578 ************ 2025-05-05 00:46:01.304414 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:46:01.304428 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:46:01.304442 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:46:01.304457 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:46:01.304471 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:46:01.304486 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:46:01.304500 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:46:01.304514 | orchestrator | 2025-05-05 00:46:01.304529 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-05 00:46:01.304543 | orchestrator | Monday 05 May 2025 00:44:06 +0000 (0:00:00.925) 0:00:17.503 ************ 2025-05-05 00:46:01.304558 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:46:01.304574 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:01.304588 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:01.304603 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:46:01.304618 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:46:01.304632 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:46:01.304654 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.304677 | orchestrator | 2025-05-05 00:46:01.304703 | orchestrator | TASK [common : Fetch fluentd Docker image 
labels] ****************************** 2025-05-05 00:46:01.304727 | orchestrator | Monday 05 May 2025 00:44:41 +0000 (0:00:35.060) 0:00:52.564 ************ 2025-05-05 00:46:01.304754 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:46:01.304789 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:46:01.304813 | orchestrator | ok: [testbed-manager] 2025-05-05 00:46:01.304828 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:46:01.304842 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:46:01.304864 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:46:01.304887 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:46:01.304910 | orchestrator | 2025-05-05 00:46:01.304934 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-05 00:46:01.304959 | orchestrator | Monday 05 May 2025 00:44:44 +0000 (0:00:03.157) 0:00:55.722 ************ 2025-05-05 00:46:01.305012 | orchestrator | ok: [testbed-manager] 2025-05-05 00:46:01.305034 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:46:01.305067 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:46:01.305092 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:46:01.305117 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:46:01.305142 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:46:01.305165 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:46:01.305180 | orchestrator | 2025-05-05 00:46:01.305195 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-05 00:46:01.305210 | orchestrator | Monday 05 May 2025 00:44:45 +0000 (0:00:01.063) 0:00:56.785 ************ 2025-05-05 00:46:01.305224 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:46:01.305238 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:46:01.305253 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:46:01.305266 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:46:01.305280 | orchestrator | skipping: [testbed-node-3] 
2025-05-05 00:46:01.305294 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:46:01.305308 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:46:01.305322 | orchestrator | 2025-05-05 00:46:01.305337 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-05 00:46:01.305351 | orchestrator | Monday 05 May 2025 00:44:46 +0000 (0:00:00.841) 0:00:57.626 ************ 2025-05-05 00:46:01.305365 | orchestrator | skipping: [testbed-manager] 2025-05-05 00:46:01.305379 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:46:01.305393 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:46:01.305407 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:46:01.305421 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:46:01.305435 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:46:01.305448 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:46:01.305463 | orchestrator | 2025-05-05 00:46:01.305477 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-05 00:46:01.305492 | orchestrator | Monday 05 May 2025 00:44:47 +0000 (0:00:00.882) 0:00:58.509 ************ 2025-05-05 00:46:01.305507 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.305522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.305542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.305567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.305594 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.305625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-05-05 00:46:01.305655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305676 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.305739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 
00:46:01.305783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.305932 | orchestrator | 2025-05-05 00:46:01.305946 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-05 00:46:01.306013 | orchestrator | Monday 05 May 2025 00:44:52 +0000 (0:00:04.791) 0:01:03.301 ************ 2025-05-05 00:46:01.306077 | orchestrator | [WARNING]: Skipped 2025-05-05 00:46:01.306093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-05 00:46:01.306110 | orchestrator | to this access issue: 2025-05-05 00:46:01.306125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-05 00:46:01.306139 | orchestrator | directory 2025-05-05 00:46:01.306154 | orchestrator | ok: 
[testbed-manager -> localhost] 2025-05-05 00:46:01.306168 | orchestrator | 2025-05-05 00:46:01.306182 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-05 00:46:01.306197 | orchestrator | Monday 05 May 2025 00:44:52 +0000 (0:00:00.687) 0:01:03.988 ************ 2025-05-05 00:46:01.306211 | orchestrator | [WARNING]: Skipped 2025-05-05 00:46:01.306225 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-05 00:46:01.306239 | orchestrator | to this access issue: 2025-05-05 00:46:01.306254 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-05 00:46:01.306268 | orchestrator | directory 2025-05-05 00:46:01.306282 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-05 00:46:01.306296 | orchestrator | 2025-05-05 00:46:01.306310 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-05 00:46:01.306325 | orchestrator | Monday 05 May 2025 00:44:53 +0000 (0:00:00.785) 0:01:04.773 ************ 2025-05-05 00:46:01.306348 | orchestrator | [WARNING]: Skipped 2025-05-05 00:46:01.306362 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-05 00:46:01.306385 | orchestrator | to this access issue: 2025-05-05 00:46:01.306410 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-05 00:46:01.306435 | orchestrator | directory 2025-05-05 00:46:01.306457 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-05 00:46:01.306472 | orchestrator | 2025-05-05 00:46:01.306487 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-05 00:46:01.306505 | orchestrator | Monday 05 May 2025 00:44:54 +0000 (0:00:00.506) 0:01:05.279 ************ 2025-05-05 00:46:01.306528 | orchestrator | [WARNING]: Skipped 2025-05-05 00:46:01.306553 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-05 00:46:01.306575 | orchestrator | to this access issue: 2025-05-05 00:46:01.306590 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-05 00:46:01.306605 | orchestrator | directory 2025-05-05 00:46:01.306619 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-05 00:46:01.306633 | orchestrator | 2025-05-05 00:46:01.306648 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-05 00:46:01.306662 | orchestrator | Monday 05 May 2025 00:44:54 +0000 (0:00:00.463) 0:01:05.743 ************ 2025-05-05 00:46:01.306676 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.306690 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:46:01.306704 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:01.306718 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:01.306733 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:46:01.306746 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:46:01.306761 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:46:01.306775 | orchestrator | 2025-05-05 00:46:01.306789 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-05 00:46:01.306803 | orchestrator | Monday 05 May 2025 00:44:58 +0000 (0:00:03.438) 0:01:09.182 ************ 2025-05-05 00:46:01.306817 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306832 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306860 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306874 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306888 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306902 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-05 00:46:01.306916 | orchestrator | 2025-05-05 00:46:01.306930 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-05 00:46:01.306944 | orchestrator | Monday 05 May 2025 00:45:00 +0000 (0:00:02.597) 0:01:11.779 ************ 2025-05-05 00:46:01.306959 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.306999 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:46:01.307014 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:01.307028 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:01.307043 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:46:01.307066 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:46:01.307081 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:46:01.307095 | orchestrator | 2025-05-05 00:46:01.307109 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-05 00:46:01.307123 | orchestrator | Monday 05 May 2025 00:45:02 +0000 (0:00:01.832) 0:01:13.612 ************ 2025-05-05 00:46:01.307152 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307174 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.307190 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-05 00:46:01.307220 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.307257 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307285 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307300 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.307349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.307379 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.307432 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307448 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:46:01.307478 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307495 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307510 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307525 | orchestrator | 2025-05-05 00:46:01.307540 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-05 00:46:01.307554 | orchestrator | Monday 05 May 2025 00:45:04 +0000 (0:00:02.232) 0:01:15.844 ************ 2025-05-05 00:46:01.307569 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-05 00:46:01.307583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2025-05-05 00:46:01.307598 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-05 00:46:01.307619 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-05 00:46:01.307634 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-05 00:46:01.307648 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-05 00:46:01.307663 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-05 00:46:01.307677 | orchestrator | 2025-05-05 00:46:01.307691 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-05 00:46:01.307717 | orchestrator | Monday 05 May 2025 00:45:08 +0000 (0:00:03.305) 0:01:19.150 ************ 2025-05-05 00:46:01.307732 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307746 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307760 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307788 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307802 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307816 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-05 00:46:01.307830 | orchestrator | 2025-05-05 00:46:01.307844 | orchestrator | TASK [common : Check common containers] 
**************************************** 2025-05-05 00:46:01.307858 | orchestrator | Monday 05 May 2025 00:45:11 +0000 (0:00:03.501) 0:01:22.651 ************ 2025-05-05 00:46:01.307878 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307924 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.307946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.307999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.308018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308053 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.308084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-05 00:46:01.308127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308143 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:46:01.308275 | orchestrator | 2025-05-05 00:46:01.308289 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-05 00:46:01.308304 | orchestrator | Monday 05 May 2025 00:45:15 +0000 (0:00:03.750) 0:01:26.402 ************ 2025-05-05 00:46:01.308318 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.308338 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:46:01.308353 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:01.308368 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:01.308382 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:46:01.308396 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:46:01.308415 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:46:01.308439 | orchestrator | 2025-05-05 00:46:01.308464 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-05 00:46:01.308488 | orchestrator | Monday 05 May 2025 00:45:16 +0000 (0:00:01.482) 0:01:27.885 ************ 2025-05-05 00:46:01.308508 | orchestrator | changed: [testbed-manager] 2025-05-05 00:46:01.308526 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:46:01.308550 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:01.308581 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:01.308607 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:46:01.308631 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:46:01.308654 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:46:01.308673 | orchestrator | 2025-05-05 00:46:01.308688 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-05 00:46:01.308702 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:01.252) 0:01:29.137 ************ 2025-05-05 00:46:01.308716 | 
orchestrator |
2025-05-05 00:46:01.308730 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-05 00:46:01.308744 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.051) 0:01:29.188 ************
2025-05-05 00:46:01.308758 | orchestrator |
2025-05-05 00:46:01.308772 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-05 00:46:01.308785 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.048) 0:01:29.237 ************
2025-05-05 00:46:01.308816 | orchestrator |
2025-05-05 00:46:01.308830 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-05 00:46:01.308895 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.048) 0:01:29.286 ************
2025-05-05 00:46:01.308910 | orchestrator |
2025-05-05 00:46:01.308925 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-05 00:46:01.308939 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.157) 0:01:29.443 ************
2025-05-05 00:46:01.309025 | orchestrator |
2025-05-05 00:46:01.309044 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-05 00:46:01.309058 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.049) 0:01:29.492 ************
2025-05-05 00:46:01.309072 | orchestrator |
2025-05-05 00:46:01.309087 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-05 00:46:01.309101 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.047) 0:01:29.540 ************
2025-05-05 00:46:01.309115 | orchestrator |
2025-05-05 00:46:01.309129 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-05 00:46:01.309144 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.061) 0:01:29.602 ************
2025-05-05 00:46:01.309158 | orchestrator | changed: [testbed-manager]
2025-05-05 00:46:01.309171 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:46:01.309184 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:46:01.309196 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:46:01.309209 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:46:01.309221 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:46:01.309233 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:46:01.309246 | orchestrator |
2025-05-05 00:46:01.309258 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-05-05 00:46:01.309271 | orchestrator | Monday 05 May 2025 00:45:26 +0000 (0:00:07.957) 0:01:37.559 ************
2025-05-05 00:46:01.309283 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:46:01.309296 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:46:01.309308 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:46:01.309321 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:46:01.309333 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:46:01.309346 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:46:01.309358 | orchestrator | changed: [testbed-manager]
2025-05-05 00:46:01.309370 | orchestrator |
2025-05-05 00:46:01.309383 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-05-05 00:46:01.309395 | orchestrator | Monday 05 May 2025 00:45:52 +0000 (0:00:25.810) 0:02:03.369 ************
2025-05-05 00:46:01.309408 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:46:01.309420 | orchestrator | ok: [testbed-manager]
2025-05-05 00:46:01.309432 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:46:01.309445 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:46:01.309457 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:46:01.309470 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:46:01.309482 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:46:01.309494 | orchestrator |
2025-05-05 00:46:01.309507 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-05-05 00:46:01.309520 | orchestrator | Monday 05 May 2025 00:45:54 +0000 (0:00:02.312) 0:02:05.681 ************
2025-05-05 00:46:01.309532 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:46:01.309545 | orchestrator | changed: [testbed-manager]
2025-05-05 00:46:01.309557 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:46:01.309570 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:46:01.309582 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:46:01.309594 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:46:01.309607 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:46:01.309619 | orchestrator |
2025-05-05 00:46:01.309631 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:46:01.309646 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:01.309660 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:01.309673 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:01.309701 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:04.340583 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:04.340723 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:04.340743 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 00:46:04.340758 | orchestrator |
2025-05-05 00:46:04.340773 | orchestrator |
2025-05-05 00:46:04.340788 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:46:04.340804 | orchestrator | Monday 05 May 2025 00:45:59 +0000 (0:00:04.840) 0:02:10.522 ************
2025-05-05 00:46:04.340818 | orchestrator | ===============================================================================
2025-05-05 00:46:04.340833 | orchestrator | common : Ensure fluentd image is present for label check --------------- 35.06s
2025-05-05 00:46:04.340847 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 25.81s
2025-05-05 00:46:04.340876 | orchestrator | common : Restart fluentd container -------------------------------------- 7.96s
2025-05-05 00:46:04.340891 | orchestrator | common : Restart cron container ----------------------------------------- 4.84s
2025-05-05 00:46:04.340905 | orchestrator | common : Copying over config.json files for services -------------------- 4.79s
2025-05-05 00:46:04.340920 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.55s
2025-05-05 00:46:04.340934 | orchestrator | common : Check common containers ---------------------------------------- 3.75s
2025-05-05 00:46:04.340948 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.59s
2025-05-05 00:46:04.340962 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.50s
2025-05-05 00:46:04.341007 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.44s
2025-05-05 00:46:04.341023 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.31s
2025-05-05 00:46:04.341038 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.16s
2025-05-05 00:46:04.341053 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.67s
2025-05-05 00:46:04.341067 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.60s
2025-05-05 00:46:04.341081 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.31s
2025-05-05 00:46:04.341096 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.23s
2025-05-05 00:46:04.341111 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.83s
2025-05-05 00:46:04.341126 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.70s
2025-05-05 00:46:04.341143 | orchestrator | common : include_tasks -------------------------------------------------- 1.53s
2025-05-05 00:46:04.341160 | orchestrator | common : Creating log volume -------------------------------------------- 1.48s
2025-05-05 00:46:04.341176 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task 6abce6b7-23ce-4143-a04c-15480383cfa6 is in state STARTED
2025-05-05 00:46:04.341193 | orchestrator | 2025-05-05 00:46:01 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:04.341212 | orchestrator | 2025-05-05 00:46:01 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:04.341259 | orchestrator | 2025-05-05 00:46:04 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:04.341433 | orchestrator | 2025-05-05 00:46:04 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:04.341812 | orchestrator | 2025-05-05 00:46:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:04.341874 | orchestrator | 2025-05-05 00:46:04 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:04.342490 | orchestrator | 2025-05-05 00:46:04 | INFO  | Task 6abce6b7-23ce-4143-a04c-15480383cfa6 is in state STARTED
2025-05-05 00:46:04.342960 | orchestrator | 2025-05-05 00:46:04 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:07.388339 | orchestrator | 2025-05-05 00:46:04 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:07.388458 | orchestrator | 2025-05-05 00:46:07 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:07.388763 | orchestrator | 2025-05-05 00:46:07 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:07.390681 | orchestrator | 2025-05-05 00:46:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:07.391015 | orchestrator | 2025-05-05 00:46:07 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:07.391611 | orchestrator | 2025-05-05 00:46:07 | INFO  | Task 6abce6b7-23ce-4143-a04c-15480383cfa6 is in state STARTED
2025-05-05 00:46:07.392285 | orchestrator | 2025-05-05 00:46:07 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:10.430546 | orchestrator | 2025-05-05 00:46:07 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:10.430706 | orchestrator | 2025-05-05 00:46:10 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:10.431051 | orchestrator | 2025-05-05 00:46:10 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:10.431657 | orchestrator | 2025-05-05 00:46:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:10.432941 | orchestrator | 2025-05-05 00:46:10 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:10.433460 | orchestrator | 2025-05-05 00:46:10 | INFO  | Task 6abce6b7-23ce-4143-a04c-15480383cfa6 is in state STARTED
2025-05-05 00:46:10.435057 | orchestrator | 2025-05-05 00:46:10 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:10.435316 | orchestrator | 2025-05-05 00:46:10 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:13.469498 | orchestrator | 2025-05-05 00:46:13 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:13.471537 | orchestrator | 2025-05-05 00:46:13 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:13.473769 | orchestrator | 2025-05-05 00:46:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:13.474121 | orchestrator | 2025-05-05 00:46:13 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:13.474816 | orchestrator | 2025-05-05 00:46:13 | INFO  | Task 6abce6b7-23ce-4143-a04c-15480383cfa6 is in state STARTED
2025-05-05 00:46:13.476533 | orchestrator | 2025-05-05 00:46:13 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:16.524908 | orchestrator | 2025-05-05 00:46:13 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:16.525095 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:16.525820 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:16.526691 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:16.527194 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:16.528240 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:16.528475 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task 6abce6b7-23ce-4143-a04c-15480383cfa6 is in state SUCCESS
2025-05-05 00:46:16.529858 | orchestrator | 2025-05-05 00:46:16 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:16.529965 | orchestrator | 2025-05-05 00:46:16 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:19.561564 | orchestrator | 2025-05-05 00:46:19 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:19.562392 | orchestrator | 2025-05-05 00:46:19 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:19.563739 | orchestrator | 2025-05-05 00:46:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:19.565227 | orchestrator | 2025-05-05 00:46:19 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:19.567519 | orchestrator | 2025-05-05 00:46:19 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:19.568340 | orchestrator | 2025-05-05 00:46:19 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:19.568422 | orchestrator | 2025-05-05 00:46:19 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:22.607211 | orchestrator | 2025-05-05 00:46:22 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:22.608280 | orchestrator | 2025-05-05 00:46:22 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:22.609515 | orchestrator | 2025-05-05 00:46:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:22.610528 | orchestrator | 2025-05-05 00:46:22 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:22.611363 | orchestrator | 2025-05-05 00:46:22 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:22.612329 | orchestrator | 2025-05-05 00:46:22 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:22.612602 | orchestrator | 2025-05-05 00:46:22 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:25.665257 | orchestrator | 2025-05-05 00:46:25 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:25.666278 | orchestrator | 2025-05-05 00:46:25 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:25.669873 | orchestrator | 2025-05-05 00:46:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:25.670926 | orchestrator | 2025-05-05 00:46:25 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:25.675583 | orchestrator | 2025-05-05 00:46:25 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:25.676372 | orchestrator | 2025-05-05 00:46:25 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:25.677159 | orchestrator | 2025-05-05 00:46:25 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:28.726086 | orchestrator | 2025-05-05 00:46:28 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:28.727656 | orchestrator | 2025-05-05 00:46:28 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:28.728368 | orchestrator | 2025-05-05 00:46:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:28.730443 | orchestrator | 2025-05-05 00:46:28 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:28.732071 | orchestrator | 2025-05-05 00:46:28 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:28.732576 | orchestrator | 2025-05-05 00:46:28 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:28.732686 | orchestrator | 2025-05-05 00:46:28 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:31.755820 | orchestrator | 2025-05-05 00:46:31 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:31.756210 | orchestrator | 2025-05-05 00:46:31 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:31.756254 | orchestrator | 2025-05-05 00:46:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:31.756730 | orchestrator | 2025-05-05 00:46:31 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:31.760942 | orchestrator | 2025-05-05 00:46:31 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:34.790548 | orchestrator | 2025-05-05 00:46:31 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:34.790656 | orchestrator | 2025-05-05 00:46:31 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:34.790694 | orchestrator | 2025-05-05 00:46:34 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:34.791578 | orchestrator | 2025-05-05 00:46:34 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:34.792127 | orchestrator | 2025-05-05 00:46:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:34.792720 | orchestrator | 2025-05-05 00:46:34 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:34.793247 | orchestrator | 2025-05-05 00:46:34 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:34.793695 | orchestrator | 2025-05-05 00:46:34 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state STARTED
2025-05-05 00:46:34.793798 | orchestrator | 2025-05-05 00:46:34 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:46:37.847711 | orchestrator | 2025-05-05 00:46:37 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:46:37.850552 | orchestrator | 2025-05-05 00:46:37 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state STARTED
2025-05-05 00:46:37.851344 | orchestrator | 2025-05-05 00:46:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:46:37.851847 | orchestrator | 2025-05-05 00:46:37 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:46:37.855992 | orchestrator | 2025-05-05 00:46:37 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:46:37.856538 | orchestrator | 2025-05-05 00:46:37 | INFO  | Task 2136897f-4d6c-413a-9e95-e597dd17d63e is in state SUCCESS
2025-05-05 00:46:37.857530 | orchestrator |
2025-05-05 00:46:37.857562 | orchestrator |
2025-05-05 00:46:37.857577 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:46:37.857593 | orchestrator |
2025-05-05 00:46:37.857608 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 00:46:37.857622 | orchestrator | Monday 05 May 2025 00:46:03 +0000 (0:00:00.253) 0:00:00.253 ************
2025-05-05 00:46:37.857658 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:46:37.857675 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:46:37.857689 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:46:37.857704 | orchestrator |
2025-05-05 00:46:37.857718 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:46:37.857732 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.293) 0:00:00.547 ************
2025-05-05 00:46:37.857747 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-05-05 00:46:37.857761 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-05-05 00:46:37.857776 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-05-05 00:46:37.857790 | orchestrator |
2025-05-05 00:46:37.857804 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-05-05 00:46:37.857818 | orchestrator |
2025-05-05 00:46:37.857832 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-05-05 00:46:37.857847 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.257) 0:00:00.804 ************
2025-05-05 00:46:37.857861 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:46:37.857876 | orchestrator |
2025-05-05 00:46:37.857890 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-05-05 00:46:37.857904 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.500) 0:00:01.304 ************
2025-05-05 00:46:37.857918 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-05 00:46:37.857932 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-05 00:46:37.857946 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-05 00:46:37.857960 | orchestrator |
2025-05-05 00:46:37.857974 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-05-05 00:46:37.857988 | orchestrator | Monday 05 May 2025 00:46:05 +0000 (0:00:00.903) 0:00:02.208 ************
2025-05-05 00:46:37.858002 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-05 00:46:37.858096 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-05 00:46:37.858113 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-05 00:46:37.858128 | orchestrator |
2025-05-05 00:46:37.858144 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-05-05 00:46:37.858160 | orchestrator | Monday 05 May 2025 00:46:07 +0000 (0:00:01.634) 0:00:03.843 ************
2025-05-05 00:46:37.858176 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:46:37.858204 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:46:37.858219 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:46:37.858236 | orchestrator |
2025-05-05 00:46:37.858257 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-05-05 00:46:37.858271 | orchestrator | Monday 05 May 2025 00:46:10 +0000 (0:00:02.786) 0:00:06.629 ************
2025-05-05 00:46:37.858285 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:46:37.858300 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:46:37.858314 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:46:37.858328 | orchestrator |
2025-05-05 00:46:37.858342 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:46:37.858356 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:46:37.858371 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:46:37.858386 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 00:46:37.858400 | orchestrator |
2025-05-05 00:46:37.858414 | orchestrator |
2025-05-05 00:46:37.858428 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:46:37.858442 | orchestrator | Monday 05 May 2025 00:46:14 +0000 (0:00:04.225) 0:00:10.855 ************
2025-05-05 00:46:37.858466 | orchestrator | ===============================================================================
2025-05-05 00:46:37.858480 | orchestrator | memcached : Restart memcached container --------------------------------- 4.23s
2025-05-05 00:46:37.858494 | orchestrator | memcached : Check memcached container ----------------------------------- 2.79s
2025-05-05 00:46:37.858508 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.63s
2025-05-05 00:46:37.858522 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s
2025-05-05 00:46:37.858536 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2025-05-05 00:46:37.858550 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-05-05 00:46:37.858575 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.26s
2025-05-05 00:46:37.858590 | orchestrator |
2025-05-05 00:46:37.858604 | orchestrator |
2025-05-05 00:46:37.858618 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:46:37.858632 | orchestrator |
2025-05-05 00:46:37.858647 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 00:46:37.858661 | orchestrator | Monday 05 May 2025 00:46:03 +0000 (0:00:00.272) 0:00:00.272 ************
2025-05-05 00:46:37.858675 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:46:37.858689 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:46:37.858703 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:46:37.858718 | orchestrator |
2025-05-05 00:46:37.858732 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:46:37.858757 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.344) 0:00:00.617 ************
2025-05-05 00:46:37.858772 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-05-05 00:46:37.858786 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-05-05 00:46:37.858801 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-05-05 00:46:37.858815 | orchestrator |
2025-05-05 00:46:37.858829 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-05-05 00:46:37.858843 | orchestrator |
2025-05-05 00:46:37.858857 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-05-05 00:46:37.858871 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.316) 0:00:00.933 ************
2025-05-05 00:46:37.858885 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:46:37.858900 | orchestrator |
2025-05-05 00:46:37.858913 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-05-05 00:46:37.858932 | orchestrator | Monday 05 May 2025 00:46:05 +0000 (0:00:00.881) 0:00:01.815 ************
2025-05-05 00:46:37.858948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.858969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.858984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859081 | orchestrator |
2025-05-05 00:46:37.859097 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-05-05 00:46:37.859112 | orchestrator | Monday 05 May 2025 00:46:06 +0000 (0:00:01.606) 0:00:03.421 ************
2025-05-05 00:46:37.859127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859243 | orchestrator |
2025-05-05 00:46:37.859257 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-05-05 00:46:37.859272 | orchestrator | Monday 05 May 2025 00:46:09 +0000 (0:00:02.726) 0:00:06.148 ************
2025-05-05 00:46:37.859286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-05 00:46:37.859415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'},
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-05 00:46:37.859453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-05 00:46:37.859480 | orchestrator | 2025-05-05 00:46:37.859507 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-05 00:46:37.859535 | orchestrator | Monday 05 May 2025 00:46:13 +0000 (0:00:03.432) 0:00:09.580 ************ 2025-05-05 00:46:37.859557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-05 00:46:37.859573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-05 00:46:37.859598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-05 00:46:37.859612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-05-05 00:46:37.859628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-05 00:46:37.859650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-05 00:46:40.896698 | orchestrator | 2025-05-05 00:46:40.896823 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-05 00:46:40.896844 | orchestrator | Monday 05 May 2025 00:46:15 +0000 (0:00:02.127) 0:00:11.708 ************ 2025-05-05 00:46:40.896860 | orchestrator | 2025-05-05 00:46:40.896874 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-05 00:46:40.896891 | 
orchestrator | Monday 05 May 2025 00:46:15 +0000 (0:00:00.105) 0:00:11.813 ************ 2025-05-05 00:46:40.896905 | orchestrator | 2025-05-05 00:46:40.896920 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-05 00:46:40.896934 | orchestrator | Monday 05 May 2025 00:46:15 +0000 (0:00:00.188) 0:00:12.002 ************ 2025-05-05 00:46:40.896948 | orchestrator | 2025-05-05 00:46:40.896962 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-05 00:46:40.897006 | orchestrator | Monday 05 May 2025 00:46:15 +0000 (0:00:00.490) 0:00:12.492 ************ 2025-05-05 00:46:40.897021 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:40.897043 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:46:40.897064 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:40.897126 | orchestrator | 2025-05-05 00:46:40.897142 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-05 00:46:40.897156 | orchestrator | Monday 05 May 2025 00:46:25 +0000 (0:00:09.874) 0:00:22.367 ************ 2025-05-05 00:46:40.897171 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:46:40.897185 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:46:40.897259 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:46:40.897275 | orchestrator | 2025-05-05 00:46:40.897292 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:46:40.897308 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:46:40.897326 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:46:40.897340 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:46:40.897355 | orchestrator | 2025-05-05 00:46:40.897369 
| orchestrator | 2025-05-05 00:46:40.897383 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:46:40.897397 | orchestrator | Monday 05 May 2025 00:46:36 +0000 (0:00:10.449) 0:00:32.816 ************ 2025-05-05 00:46:40.897411 | orchestrator | =============================================================================== 2025-05-05 00:46:40.897425 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.45s 2025-05-05 00:46:40.897439 | orchestrator | redis : Restart redis container ----------------------------------------- 9.87s 2025-05-05 00:46:40.897453 | orchestrator | redis : Copying over redis config files --------------------------------- 3.43s 2025-05-05 00:46:40.897475 | orchestrator | redis : Copying over default config.json files -------------------------- 2.73s 2025-05-05 00:46:40.897498 | orchestrator | redis : Check redis containers ------------------------------------------ 2.13s 2025-05-05 00:46:40.897519 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.61s 2025-05-05 00:46:40.897540 | orchestrator | redis : include_tasks --------------------------------------------------- 0.88s 2025-05-05 00:46:40.897564 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.78s 2025-05-05 00:46:40.897588 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-05-05 00:46:40.897610 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2025-05-05 00:46:40.897634 | orchestrator | 2025-05-05 00:46:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:46:40.897667 | orchestrator | 2025-05-05 00:46:40 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:46:40.897768 | orchestrator | 2025-05-05 00:46:40 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state 
STARTED 2025-05-05 00:46:40.898615 | orchestrator | 2025-05-05 00:46:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:46:40.899130 | orchestrator | 2025-05-05 00:46:40 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:46:40.900128 | orchestrator | 2025-05-05 00:46:40 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED [identical polling output repeated every ~3 s from 00:46:43 to 00:47:14: all five tasks remained in state STARTED, with a "Wait 1 second(s) until the next check" message between rounds] 2025-05-05 00:47:17.514133 | orchestrator | 2025-05-05 00:47:17 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:47:17.515026 | orchestrator | 2025-05-05 00:47:17.515061 | orchestrator | 2025-05-05 00:47:17.515074 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 00:47:17.515086 | orchestrator | 2025-05-05 00:47:17.515098 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 00:47:17.515110 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.282) 0:00:00.282 ************ 2025-05-05 00:47:17.515121 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:47:17.515145 | orchestrator
| ok: [testbed-node-4] 2025-05-05 00:47:17.515208 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:47:17.515280 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:47:17.515293 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:47:17.515305 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:47:17.515317 | orchestrator | 2025-05-05 00:47:17.515329 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 00:47:17.515341 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.519) 0:00:00.802 ************ 2025-05-05 00:47:17.515353 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-05 00:47:17.515373 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-05 00:47:17.515395 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-05 00:47:17.515409 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-05 00:47:17.515420 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-05 00:47:17.515436 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-05 00:47:17.515447 | orchestrator | 2025-05-05 00:47:17.515461 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-05 00:47:17.515473 | orchestrator | 2025-05-05 00:47:17.515484 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-05 00:47:17.515496 | orchestrator | Monday 05 May 2025 00:46:06 +0000 (0:00:01.143) 0:00:01.945 ************ 2025-05-05 00:47:17.515508 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:47:17.515520 | orchestrator | 
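The `Group hosts based on enabled services` task above places each host into a dynamically named inventory group composed from boolean service flags, e.g. `enable_openvswitch_True_enable_ovs_dpdk_False`. A minimal sketch of how such a group name can be composed (the function name and flag mapping are illustrative assumptions, not Kolla's actual implementation):

```python
def service_group_name(flags):
    """Compose a Kolla-style dynamic group name from service flags.

    flags: ordered mapping of flag name -> boolean value, e.g.
    {'enable_openvswitch': True, 'enable_ovs_dpdk': False}
    """
    # Python 3.7+ dicts preserve insertion order, so the composed
    # name is stable across runs for the same flag ordering.
    return "_".join(f"{key}_{value}" for key, value in flags.items())

print(service_group_name({"enable_openvswitch": True, "enable_ovs_dpdk": False}))
# → enable_openvswitch_True_enable_ovs_dpdk_False
```

Hosts grouped this way can then be targeted by a play that only applies when a given combination of services is enabled, which is why all six nodes land in the same group here.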
2025-05-05 00:47:17.515532 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-05 00:47:17.515543 | orchestrator | Monday 05 May 2025 00:46:07 +0000 (0:00:01.315) 0:00:03.260 ************ 2025-05-05 00:47:17.515554 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-05 00:47:17.515566 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-05 00:47:17.515594 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-05 00:47:17.515606 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-05 00:47:17.515618 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-05 00:47:17.515629 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-05 00:47:17.515640 | orchestrator | 2025-05-05 00:47:17.515653 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-05 00:47:17.515666 | orchestrator | Monday 05 May 2025 00:46:08 +0000 (0:00:01.388) 0:00:04.649 ************ 2025-05-05 00:47:17.515678 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-05 00:47:17.515695 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-05 00:47:17.515707 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-05 00:47:17.515719 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-05 00:47:17.515732 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-05 00:47:17.515744 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-05 00:47:17.515775 | orchestrator | 2025-05-05 00:47:17.515793 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-05 00:47:17.515811 | orchestrator | Monday 05 May 2025 00:46:11 +0000 (0:00:02.485) 0:00:07.134 ************ 2025-05-05 00:47:17.515832 | orchestrator | skipping: [testbed-node-3] => 
(item=openvswitch)  2025-05-05 00:47:17.515853 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:47:17.515872 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-05 00:47:17.515886 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:47:17.515899 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-05 00:47:17.515911 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:47:17.515924 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-05 00:47:17.515936 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:47:17.515985 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-05 00:47:17.516031 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:47:17.516044 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-05 00:47:17.516063 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:47:17.516075 | orchestrator | 2025-05-05 00:47:17.516087 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-05 00:47:17.516098 | orchestrator | Monday 05 May 2025 00:46:12 +0000 (0:00:01.512) 0:00:08.647 ************ 2025-05-05 00:47:17.516109 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:47:17.516120 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:47:17.516132 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:47:17.516143 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:47:17.516154 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:47:17.516187 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:47:17.516200 | orchestrator | 2025-05-05 00:47:17.516211 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-05 00:47:17.516223 | orchestrator | Monday 05 May 2025 00:46:13 +0000 (0:00:00.724) 0:00:09.371 ************ 2025-05-05 00:47:17.516248 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516517 | orchestrator |
2025-05-05 00:47:17.516528 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-05-05 00:47:17.516546 | orchestrator | Monday 05 May 2025 00:46:15 +0000 (0:00:02.345) 0:00:11.717 ************
2025-05-05 00:47:17.516558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.516751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.516869 | orchestrator |
2025-05-05 00:47:17.516881 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
2025-05-05 00:47:17.516893 | orchestrator | Monday 05 May 2025 00:46:19 +0000 (0:00:03.498) 0:00:15.215 ************
2025-05-05 00:47:17.516904 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:47:17.516916 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:47:17.516928 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:47:17.516939 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:47:17.516950 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:47:17.516961 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:47:17.516973 | orchestrator |
2025-05-05 00:47:17.516985 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-05-05 00:47:17.516996 | orchestrator | Monday 05 May 2025 00:46:21 +0000 (0:00:02.310) 0:00:17.527 ************
2025-05-05 00:47:17.517008 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:47:17.517019 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:47:17.517030 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:47:17.517042 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:47:17.517053 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:47:17.517064 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:47:17.517076 | orchestrator |
2025-05-05 00:47:17.517087 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-05-05 00:47:17.517098 | orchestrator | Monday 05 May 2025 00:46:24 +0000 (0:00:02.687) 0:00:20.214 ************
2025-05-05 00:47:17.517123 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:47:17.517135 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:47:17.517155 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:47:17.517221 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:47:17.517234 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:47:17.517246 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:47:17.517258 | orchestrator |
2025-05-05 00:47:17.517269 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-05-05 00:47:17.517281 | orchestrator | Monday 05 May 2025 00:46:26 +0000 (0:00:02.273) 0:00:22.487 ************
2025-05-05 00:47:17.517293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.517305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.517331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.517354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.517367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.517378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-05 00:47:17.517390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.517420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.517439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.517451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.517463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.517475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-05 00:47:17.517486 | orchestrator |
2025-05-05 00:47:17.517498 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-05 00:47:17.517510 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:03.455) 0:00:25.943 ************
2025-05-05 00:47:17.517527 | orchestrator |
2025-05-05 00:47:17.517539 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-05 00:47:17.517550 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:00.141) 0:00:26.084 ************
2025-05-05 00:47:17.517562 | orchestrator |
2025-05-05 00:47:17.517573 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-05 00:47:17.517584 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:00.264) 0:00:26.349 ************
2025-05-05 00:47:17.517594 | orchestrator |
2025-05-05 00:47:17.517605 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-05 00:47:17.517615 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:00.219) 0:00:26.568 ************
2025-05-05 00:47:17.517625 | orchestrator |
2025-05-05 00:47:17.517640 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-05 00:47:17.517651 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:00.305) 0:00:26.873 ************
2025-05-05 00:47:17.517661 | orchestrator |
2025-05-05 00:47:17.517671 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-05 00:47:17.517682 | orchestrator | Monday 05 May 2025 00:46:31 +0000 (0:00:00.122) 0:00:26.996 ************
2025-05-05 00:47:17.517692 | orchestrator |
2025-05-05 00:47:17.517703 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-05-05 00:47:17.517713 | orchestrator | Monday 05 May 2025 00:46:31 +0000 (0:00:00.220) 0:00:27.217 ************
2025-05-05 00:47:17.517723 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:47:17.517734 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:47:17.517744 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:47:17.517755 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:47:17.517765 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:47:17.517775 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:47:17.517786 | orchestrator |
2025-05-05 00:47:17.517796 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-05-05 00:47:17.517807 | orchestrator | Monday 05 May 2025 00:46:41 +0000 (0:00:09.757) 0:00:36.974 ************
2025-05-05 00:47:17.517821 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:47:17.517832 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:47:17.517842 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:47:17.517853 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:47:17.517863 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:47:17.517873 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:47:17.517883 | orchestrator |
2025-05-05 00:47:17.517894 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-05 00:47:17.517904 | orchestrator | Monday 05 May 2025 00:46:43 +0000 (0:00:02.149) 0:00:39.124 ************
2025-05-05 00:47:17.517914 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:47:17.517925 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:47:17.517935 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:47:17.517945 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:47:17.517956 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:47:17.517972 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:47:17.517983 | orchestrator |
2025-05-05 00:47:17.517994 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-05-05 00:47:17.518004 | orchestrator | Monday 05 May 2025 00:46:53 +0000 (0:00:10.343) 0:00:49.467 ************
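The task results below loop over small {col, name, value} items applied to the Open_vSwitch table. As a rough sketch of what each item amounts to (assuming plain ovs-vsctl semantics: `set` for present values, `remove` when the item carries state 'absent'; the helper below is illustrative and is not kolla-ansible's actual module code):

```python
# Illustrative only: map one loop item from the task output below onto the
# ovs-vsctl command it roughly corresponds to. Assumes standard ovs-vsctl
# set/remove semantics on the Open_vSwitch table.

def ovs_vsctl_equivalent(item):
    """Return the ovs-vsctl command line equivalent to one loop item."""
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        # e.g. hw-offload is ensured absent from other_config
        return f"ovs-vsctl remove Open_vSwitch . {col} {name}"
    return f"ovs-vsctl set Open_vSwitch . {col}:{name}={item['value']}"

# The three items applied per node in the log, shown for testbed-node-0:
items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]
for item in items:
    print(ovs_vsctl_equivalent(item))
```

This also explains why the hw-offload items report "ok" rather than "changed": the key was already absent, so the remove was a no-op.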
2025-05-05 00:47:17.518048 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-05-05 00:47:17.518062 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-05-05 00:47:17.518073 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-05-05 00:47:17.518084 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-05-05 00:47:17.518094 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-05-05 00:47:17.518110 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-05-05 00:47:17.518124 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-05-05 00:47:17.518135 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-05-05 00:47:17.518145 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-05-05 00:47:17.518155 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-05-05 00:47:17.518182 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-05-05 00:47:17.518193 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-05-05 00:47:17.518204 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-05 00:47:17.518214 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-05 00:47:17.518224 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-05 00:47:17.518235 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-05 00:47:17.518245 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-05 00:47:17.518255 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-05 00:47:17.518265 | orchestrator |
2025-05-05 00:47:17.518276 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-05-05 00:47:17.518286 | orchestrator | Monday 05 May 2025 00:47:01 +0000 (0:00:07.660) 0:00:57.128 ************
2025-05-05 00:47:17.518296 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-05-05 00:47:17.518307 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:47:17.518318 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-05-05 00:47:17.518328 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:47:17.518339 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-05-05 00:47:17.518349 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:47:17.518359 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-05-05 00:47:17.518369 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-05-05 00:47:17.518379 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-05 00:47:17.518389 | orchestrator |
2025-05-05 00:47:17.518400 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-05 00:47:17.518410 | orchestrator | Monday 05 May 2025 00:47:04 +0000 (0:00:02.965) 0:01:00.093 ************
2025-05-05 00:47:17.518420 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-05-05 00:47:17.518430 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:47:17.518441 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-05-05 00:47:17.518451 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:47:17.518461 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-05-05 00:47:17.518472 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:47:17.518482 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-05 00:47:17.518498 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-05 00:47:20.551333 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-05 00:47:20.551439 | orchestrator |
2025-05-05 00:47:20.551460 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-05 00:47:20.551504 | orchestrator | Monday 05 May 2025 00:47:08 +0000 (0:00:03.892) 0:01:03.986 ************
2025-05-05 00:47:20.551519 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:47:20.551534 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:47:20.551549 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:47:20.551562 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:47:20.551576 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:47:20.551590 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:47:20.551604 | orchestrator |
2025-05-05 00:47:20.551619 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:47:20.551634 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-05 00:47:20.551649 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3 
 rescued=0 ignored=0 2025-05-05 00:47:20.551663 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-05 00:47:20.551677 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-05 00:47:20.551691 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-05 00:47:20.551717 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-05 00:47:20.551731 | orchestrator | 2025-05-05 00:47:20.551746 | orchestrator | 2025-05-05 00:47:20.551760 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:47:20.551774 | orchestrator | Monday 05 May 2025 00:47:15 +0000 (0:00:07.792) 0:01:11.778 ************ 2025-05-05 00:47:20.551788 | orchestrator | =============================================================================== 2025-05-05 00:47:20.551802 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.14s 2025-05-05 00:47:20.551816 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.76s 2025-05-05 00:47:20.551830 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.66s 2025-05-05 00:47:20.551844 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.89s 2025-05-05 00:47:20.551858 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.50s 2025-05-05 00:47:20.551874 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.46s 2025-05-05 00:47:20.551889 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.97s 2025-05-05 00:47:20.551905 | orchestrator | openvswitch : Copying over start-ovsdb-server files for 
openvswitch-db-server --- 2.69s 2025-05-05 00:47:20.551922 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.49s 2025-05-05 00:47:20.551938 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.35s 2025-05-05 00:47:20.551960 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.31s 2025-05-05 00:47:20.551975 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.27s 2025-05-05 00:47:20.551991 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.15s 2025-05-05 00:47:20.552007 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.51s 2025-05-05 00:47:20.552023 | orchestrator | module-load : Load modules ---------------------------------------------- 1.39s 2025-05-05 00:47:20.552039 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.32s 2025-05-05 00:47:20.552054 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.27s 2025-05-05 00:47:20.552076 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2025-05-05 00:47:20.552092 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.72s 2025-05-05 00:47:20.552107 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2025-05-05 00:47:20.552122 | orchestrator | 2025-05-05 00:47:17 | INFO  | Task fb3ff896-d636-4b4a-9284-bfdc4d7da89e is in state SUCCESS 2025-05-05 00:47:20.552138 | orchestrator | 2025-05-05 00:47:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:47:20.552154 | orchestrator | 2025-05-05 00:47:17 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:47:20.552195 | orchestrator | 2025-05-05 00:47:17 | INFO  
| Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:47:20.552213 | orchestrator | 2025-05-05 00:47:17 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state STARTED
2025-05-05 00:47:20.552230 | orchestrator | 2025-05-05 00:47:17 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:48:33.542737 | orchestrator | 2025-05-05 00:48:33 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:48:33.542991 | orchestrator | 2025-05-05 00:48:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:48:33.545719 | orchestrator | 2025-05-05 00:48:33 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED
2025-05-05 00:48:33.546320 | orchestrator | 2025-05-05 00:48:33 | INFO  | Task 
864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:48:33.546995 | orchestrator | 2025-05-05 00:48:33 | INFO  | Task 80130379-f6f3-4bde-8a8d-4ba92ea2606e is in state SUCCESS
2025-05-05 00:48:33.548182 | orchestrator |
2025-05-05 00:48:33.548219 | orchestrator |
2025-05-05 00:48:33.548233 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-05 00:48:33.548248 | orchestrator |
2025-05-05 00:48:33.548262 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-05 00:48:33.548276 | orchestrator | Monday 05 May 2025 00:46:18 +0000 (0:00:00.099) 0:00:00.099 ************
2025-05-05 00:48:33.548290 | orchestrator | ok: [localhost] => {
2025-05-05 00:48:33.548335 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-05 00:48:33.548350 | orchestrator | }
2025-05-05 00:48:33.548365 | orchestrator |
2025-05-05 00:48:33.548379 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-05 00:48:33.548393 | orchestrator | Monday 05 May 2025 00:46:18 +0000 (0:00:00.043) 0:00:00.142 ************
2025-05-05 00:48:33.548407 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-05 00:48:33.548422 | orchestrator | ...ignoring
2025-05-05 00:48:33.548436 | orchestrator |
2025-05-05 00:48:33.548451 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-05 00:48:33.548465 | orchestrator | Monday 05 May 2025 00:46:21 +0000 (0:00:02.930) 0:00:03.072 ************
2025-05-05 00:48:33.548479 | orchestrator | skipping: [localhost]
2025-05-05 00:48:33.548493 | orchestrator |
2025-05-05 00:48:33.548508 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-05 00:48:33.548522 | orchestrator | Monday 05 May 2025 00:46:21 +0000 (0:00:00.082) 0:00:03.154 ************
2025-05-05 00:48:33.548562 | orchestrator | ok: [localhost]
2025-05-05 00:48:33.548576 | orchestrator |
2025-05-05 00:48:33.548590 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:48:33.548605 | orchestrator |
2025-05-05 00:48:33.548619 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 00:48:33.548633 | orchestrator | Monday 05 May 2025 00:46:21 +0000 (0:00:00.241) 0:00:03.396 ************
2025-05-05 00:48:33.548650 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:48:33.548675 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:48:33.548699 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:48:33.548716 | orchestrator |
2025-05-05 00:48:33.548730 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:48:33.548744 | orchestrator | Monday 05 May 2025 00:46:22 +0000 (0:00:00.921) 0:00:04.317 ************
2025-05-05 00:48:33.548816 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-05 00:48:33.548834 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-05 00:48:33.548849 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-05 00:48:33.548863 | orchestrator |
2025-05-05 00:48:33.548877 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-05 00:48:33.548891 | orchestrator |
2025-05-05 00:48:33.548905 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-05 00:48:33.548919 | orchestrator | Monday 05 May 2025 00:46:23 +0000 (0:00:00.582) 0:00:04.900 ************
2025-05-05 00:48:33.548934 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:48:33.548948 | orchestrator |
2025-05-05 00:48:33.548962 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-05 00:48:33.548976 | orchestrator | Monday 05 May 2025 00:46:24 +0000 (0:00:01.269) 0:00:06.169 ************
2025-05-05 00:48:33.548990 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:48:33.549005 | orchestrator |
2025-05-05 00:48:33.549018 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-05 00:48:33.549032 | orchestrator | Monday 05 May 2025 00:46:26 +0000 (0:00:01.732) 0:00:07.902 ************
2025-05-05 00:48:33.549046 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:48:33.549061 | orchestrator |
2025-05-05 00:48:33.549076 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-05 00:48:33.549103 | orchestrator | Monday 05 May 2025 00:46:27 +0000 (0:00:01.261) 0:00:09.163 ************
2025-05-05 00:48:33.549117 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:48:33.549132 | orchestrator |
2025-05-05 00:48:33.549146 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-05 00:48:33.549160 | orchestrator | Monday 05 May 2025 00:46:29 +0000 (0:00:01.358) 0:00:10.521 ************
2025-05-05 00:48:33.549174 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:48:33.549188 | orchestrator |
2025-05-05 00:48:33.549202 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-05 00:48:33.549216 | orchestrator | Monday 05 May 2025 00:46:29 +0000 (0:00:00.370) 0:00:10.892 ************
2025-05-05 00:48:33.549230 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:48:33.549259 | orchestrator |
2025-05-05 00:48:33.549273 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-05 00:48:33.549287 | orchestrator | Monday 05 May 2025 00:46:29 +0000 (0:00:00.321) 0:00:11.213 ************
2025-05-05 00:48:33.549361 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:48:33.549378 | orchestrator |
2025-05-05 00:48:33.549393 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-05 00:48:33.549407 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:00.849) 0:00:12.062 ************
2025-05-05 00:48:33.549421 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:48:33.549435 | orchestrator |
2025-05-05 00:48:33.549449 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-05 00:48:33.549484 | orchestrator | Monday 05 May 2025 00:46:31 +0000 (0:00:00.775) 0:00:12.838 ************
2025-05-05 00:48:33.549499 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:48:33.549513 | orchestrator |
2025-05-05 00:48:33.549527 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-05 00:48:33.549542 | orchestrator | Monday 05 May 2025 00:46:31 +0000 (0:00:00.367) 0:00:13.206 ************
2025-05-05 00:48:33.549556 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:48:33.549570 | orchestrator |
2025-05-05 00:48:33.549595 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-05 00:48:33.549609 | orchestrator | Monday 05 May 2025 00:46:32 +0000 (0:00:00.669) 0:00:13.875 ************
2025-05-05 00:48:33.549651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-05 00:48:33.549671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-05 00:48:33.549684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-05 00:48:33.549697 | orchestrator |
2025-05-05 00:48:33.549716 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-05 00:48:33.549729 | orchestrator | Monday 05 May 2025 00:46:33 +0000 (0:00:01.206) 0:00:15.081 ************
2025-05-05 00:48:33.549751 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:48:33.549775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:48:33.549789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:48:33.549802 | orchestrator | 2025-05-05 00:48:33.549815 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-05 00:48:33.549827 | orchestrator | Monday 05 May 2025 00:46:35 +0000 (0:00:01.436) 0:00:16.518 ************ 2025-05-05 00:48:33.549840 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-05 00:48:33.549852 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-05 00:48:33.549865 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-05 00:48:33.549883 | 
orchestrator | 2025-05-05 00:48:33.549899 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-05 00:48:33.549921 | orchestrator | Monday 05 May 2025 00:46:36 +0000 (0:00:01.537) 0:00:18.055 ************ 2025-05-05 00:48:33.549943 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-05 00:48:33.549964 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-05 00:48:33.549977 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-05 00:48:33.549990 | orchestrator | 2025-05-05 00:48:33.550002 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-05 00:48:33.550058 | orchestrator | Monday 05 May 2025 00:46:38 +0000 (0:00:01.706) 0:00:19.765 ************ 2025-05-05 00:48:33.550074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-05 00:48:33.550087 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-05 00:48:33.550099 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-05 00:48:33.550112 | orchestrator | 2025-05-05 00:48:33.550147 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-05 00:48:33.550160 | orchestrator | Monday 05 May 2025 00:46:40 +0000 (0:00:01.761) 0:00:21.527 ************ 2025-05-05 00:48:33.550173 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-05 00:48:33.550185 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-05 00:48:33.550197 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-05 00:48:33.550209 | orchestrator | 2025-05-05 00:48:33.550229 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-05 00:48:33.550255 | orchestrator | Monday 05 May 2025 00:46:43 +0000 (0:00:02.993) 0:00:24.520 ************ 2025-05-05 00:48:33.550269 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-05 00:48:33.550282 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-05 00:48:33.550294 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-05 00:48:33.550340 | orchestrator | 2025-05-05 00:48:33.550359 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-05 00:48:33.550378 | orchestrator | Monday 05 May 2025 00:46:44 +0000 (0:00:01.580) 0:00:26.101 ************ 2025-05-05 00:48:33.550391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-05 00:48:33.550404 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-05 00:48:33.550416 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-05 00:48:33.550428 | orchestrator | 2025-05-05 00:48:33.550441 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-05 00:48:33.550453 | orchestrator | Monday 05 May 2025 00:46:46 +0000 (0:00:02.138) 0:00:28.240 ************ 2025-05-05 00:48:33.550465 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:48:33.550478 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:48:33.550490 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:48:33.550503 | orchestrator | 2025-05-05 
00:48:33.550515 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-05 00:48:33.550528 | orchestrator | Monday 05 May 2025 00:46:47 +0000 (0:00:00.588) 0:00:28.828 ************ 2025-05-05 00:48:33.550541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:48:33.550581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:48:33.550604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:48:33.550618 | orchestrator | 2025-05-05 00:48:33.550631 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-05 00:48:33.550644 | orchestrator | Monday 05 May 2025 00:46:48 +0000 (0:00:01.289) 0:00:30.118 ************ 2025-05-05 00:48:33.550656 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:48:33.550668 | orchestrator | changed: [testbed-node-1] 
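The `healthcheck` block repeated in the container definitions above (`interval: 30`, `retries: 3`, `start_period: 5`, `timeout: 30`, test command `healthcheck_rabbitmq`) follows Docker's healthcheck semantics: a container is only flipped to unhealthy after `retries` consecutive probe failures, and any passing probe resets the streak. A minimal sketch of that state transition in Python (the failure-counting logic is an illustration of Docker's documented behaviour, not something taken from this log):

```python
# Sketch of Docker-style healthcheck state transitions as configured for
# the rabbitmq container above: retries=3 consecutive failures => unhealthy.
def health_status(probe_results, retries=3):
    """probe_results: iterable of booleans (True = probe passed).
    Returns the final status string Docker would report."""
    consecutive_failures = 0
    status = "starting"          # status before the first successful probe
    for passed in probe_results:
        if passed:
            consecutive_failures = 0
            status = "healthy"   # any success resets the failure streak
        else:
            consecutive_failures += 1
            if consecutive_failures >= retries:
                status = "unhealthy"
    return status

# One transient failure does not mark the container unhealthy:
print(health_status([True, False, True]))          # healthy
# Three consecutive failures do:
print(health_status([True, False, False, False]))  # unhealthy
```

This is why the long "Waiting for rabbitmq to start" tasks later in the log are expected: with a 30-second interval and a 5-second start period, a freshly restarted node needs a while before the healthcheck reports it healthy.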
2025-05-05 00:48:33.550681 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:48:33.550693 | orchestrator | 2025-05-05 00:48:33.550706 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-05 00:48:33.550718 | orchestrator | Monday 05 May 2025 00:46:49 +0000 (0:00:00.992) 0:00:31.110 ************ 2025-05-05 00:48:33.550730 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:48:33.550742 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:48:33.550755 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:48:33.550767 | orchestrator | 2025-05-05 00:48:33.550832 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-05 00:48:33.550853 | orchestrator | Monday 05 May 2025 00:46:55 +0000 (0:00:05.655) 0:00:36.766 ************ 2025-05-05 00:48:33.550866 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:48:33.550878 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:48:33.550891 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:48:33.550918 | orchestrator | 2025-05-05 00:48:33.550932 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-05 00:48:33.550944 | orchestrator | 2025-05-05 00:48:33.550957 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-05 00:48:33.550969 | orchestrator | Monday 05 May 2025 00:46:55 +0000 (0:00:00.348) 0:00:37.115 ************ 2025-05-05 00:48:33.550982 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:48:33.550994 | orchestrator | 2025-05-05 00:48:33.551006 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-05 00:48:33.551019 | orchestrator | Monday 05 May 2025 00:46:56 +0000 (0:00:00.757) 0:00:37.872 ************ 2025-05-05 00:48:33.551031 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:48:33.551043 | orchestrator | 2025-05-05 
00:48:33.551056 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-05 00:48:33.551068 | orchestrator | Monday 05 May 2025 00:46:56 +0000 (0:00:00.237) 0:00:38.110 ************ 2025-05-05 00:48:33.551080 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:48:33.551093 | orchestrator | 2025-05-05 00:48:33.551105 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-05 00:48:33.551117 | orchestrator | Monday 05 May 2025 00:47:03 +0000 (0:00:06.758) 0:00:44.868 ************ 2025-05-05 00:48:33.551130 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:48:33.551142 | orchestrator | 2025-05-05 00:48:33.551154 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-05 00:48:33.551166 | orchestrator | 2025-05-05 00:48:33.551179 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-05 00:48:33.551191 | orchestrator | Monday 05 May 2025 00:47:52 +0000 (0:00:49.438) 0:01:34.307 ************ 2025-05-05 00:48:33.551204 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:48:33.551216 | orchestrator | 2025-05-05 00:48:33.551229 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-05 00:48:33.551241 | orchestrator | Monday 05 May 2025 00:47:53 +0000 (0:00:00.708) 0:01:35.016 ************ 2025-05-05 00:48:33.551253 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:48:33.551336 | orchestrator | 2025-05-05 00:48:33.551354 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-05 00:48:33.551367 | orchestrator | Monday 05 May 2025 00:47:53 +0000 (0:00:00.354) 0:01:35.370 ************ 2025-05-05 00:48:33.551380 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:48:33.551392 | orchestrator | 2025-05-05 00:48:33.551405 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-05-05 00:48:33.551417 | orchestrator | Monday 05 May 2025 00:48:00 +0000 (0:00:06.830) 0:01:42.200 ************ 2025-05-05 00:48:33.551430 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:48:33.551442 | orchestrator | 2025-05-05 00:48:33.551454 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-05 00:48:33.551466 | orchestrator | 2025-05-05 00:48:33.551479 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-05 00:48:33.551491 | orchestrator | Monday 05 May 2025 00:48:11 +0000 (0:00:10.913) 0:01:53.113 ************ 2025-05-05 00:48:33.551504 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:48:33.551516 | orchestrator | 2025-05-05 00:48:33.551533 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-05 00:48:33.551546 | orchestrator | Monday 05 May 2025 00:48:12 +0000 (0:00:00.559) 0:01:53.673 ************ 2025-05-05 00:48:33.551558 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:48:33.551571 | orchestrator | 2025-05-05 00:48:33.551584 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-05 00:48:33.551610 | orchestrator | Monday 05 May 2025 00:48:12 +0000 (0:00:00.261) 0:01:53.935 ************ 2025-05-05 00:48:36.581888 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:48:36.582014 | orchestrator | 2025-05-05 00:48:36.582740 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-05 00:48:36.582768 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:06.721) 0:02:00.656 ************ 2025-05-05 00:48:36.582784 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:48:36.582817 | orchestrator | 2025-05-05 00:48:36.582832 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2025-05-05 00:48:36.582848 | orchestrator | 2025-05-05 00:48:36.582862 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-05 00:48:36.582877 | orchestrator | Monday 05 May 2025 00:48:28 +0000 (0:00:09.414) 0:02:10.071 ************ 2025-05-05 00:48:36.582891 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:48:36.582906 | orchestrator | 2025-05-05 00:48:36.582920 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-05 00:48:36.582935 | orchestrator | Monday 05 May 2025 00:48:29 +0000 (0:00:00.710) 0:02:10.781 ************ 2025-05-05 00:48:36.582965 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-05 00:48:36.582980 | orchestrator | enable_outward_rabbitmq_True 2025-05-05 00:48:36.582995 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-05 00:48:36.583010 | orchestrator | outward_rabbitmq_restart 2025-05-05 00:48:36.583025 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:48:36.583040 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:48:36.583055 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:48:36.583069 | orchestrator | 2025-05-05 00:48:36.583084 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-05 00:48:36.583099 | orchestrator | skipping: no hosts matched 2025-05-05 00:48:36.583114 | orchestrator | 2025-05-05 00:48:36.583128 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-05 00:48:36.583143 | orchestrator | skipping: no hosts matched 2025-05-05 00:48:36.583157 | orchestrator | 2025-05-05 00:48:36.583172 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-05 00:48:36.583186 | orchestrator | skipping: no hosts matched 
2025-05-05 00:48:36.583201 | orchestrator | 2025-05-05 00:48:36.583215 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:48:36.583230 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-05 00:48:36.583246 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-05 00:48:36.583260 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:48:36.583274 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 00:48:36.583289 | orchestrator | 2025-05-05 00:48:36.583321 | orchestrator | 2025-05-05 00:48:36.583337 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:48:36.583352 | orchestrator | Monday 05 May 2025 00:48:32 +0000 (0:00:02.915) 0:02:13.697 ************ 2025-05-05 00:48:36.583366 | orchestrator | =============================================================================== 2025-05-05 00:48:36.583381 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 69.77s 2025-05-05 00:48:36.583395 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 20.31s 2025-05-05 00:48:36.583410 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.66s 2025-05-05 00:48:36.583424 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.99s 2025-05-05 00:48:36.583467 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.93s 2025-05-05 00:48:36.583482 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.92s 2025-05-05 00:48:36.583497 | orchestrator | rabbitmq : Copying over enabled_plugins 
--------------------------------- 2.14s 2025-05-05 00:48:36.583511 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s 2025-05-05 00:48:36.583525 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.76s 2025-05-05 00:48:36.583540 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.73s 2025-05-05 00:48:36.583554 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s 2025-05-05 00:48:36.583568 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.58s 2025-05-05 00:48:36.583582 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.54s 2025-05-05 00:48:36.583602 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.44s 2025-05-05 00:48:36.583617 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.36s 2025-05-05 00:48:36.583631 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.29s 2025-05-05 00:48:36.583646 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.27s 2025-05-05 00:48:36.583660 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.26s 2025-05-05 00:48:36.583674 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.21s 2025-05-05 00:48:36.583689 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2025-05-05 00:48:36.583703 | orchestrator | 2025-05-05 00:48:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:36.583736 | orchestrator | 2025-05-05 00:48:36 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:36.585869 | orchestrator | 2025-05-05 00:48:36 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:36.585979 | orchestrator | 2025-05-05 00:48:36 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:36.586060 | orchestrator | 2025-05-05 00:48:36 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:39.618242 | orchestrator | 2025-05-05 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:39.618479 | orchestrator | 2025-05-05 00:48:39 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:39.619090 | orchestrator | 2025-05-05 00:48:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:39.620814 | orchestrator | 2025-05-05 00:48:39 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:39.622122 | orchestrator | 2025-05-05 00:48:39 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:39.622522 | orchestrator | 2025-05-05 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:42.669569 | orchestrator | 2025-05-05 00:48:42 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:45.713223 | orchestrator | 2025-05-05 00:48:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:45.713449 | orchestrator | 2025-05-05 00:48:42 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:45.713468 | orchestrator | 2025-05-05 00:48:42 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:45.713481 | orchestrator | 2025-05-05 00:48:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:45.713514 | orchestrator | 2025-05-05 00:48:45 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:45.715133 | orchestrator | 2025-05-05 00:48:45 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:45.717002 | orchestrator | 2025-05-05 00:48:45 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:45.718881 | orchestrator | 2025-05-05 00:48:45 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:45.719033 | orchestrator | 2025-05-05 00:48:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:48.759851 | orchestrator | 2025-05-05 00:48:48 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:48.761512 | orchestrator | 2025-05-05 00:48:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:48.762568 | orchestrator | 2025-05-05 00:48:48 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:48.763969 | orchestrator | 2025-05-05 00:48:48 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:48.764212 | orchestrator | 2025-05-05 00:48:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:51.812413 | orchestrator | 2025-05-05 00:48:51 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:51.812914 | orchestrator | 2025-05-05 00:48:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:51.816026 | orchestrator | 2025-05-05 00:48:51 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:51.823110 | orchestrator | 2025-05-05 00:48:51 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:54.858178 | orchestrator | 2025-05-05 00:48:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:54.858406 | orchestrator | 2025-05-05 00:48:54 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:54.859464 | orchestrator | 2025-05-05 00:48:54 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:54.861257 | orchestrator | 2025-05-05 00:48:54 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:54.863121 | orchestrator | 2025-05-05 00:48:54 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:48:54.864817 | orchestrator | 2025-05-05 00:48:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:48:57.930111 | orchestrator | 2025-05-05 00:48:57 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:48:57.931524 | orchestrator | 2025-05-05 00:48:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:48:57.932869 | orchestrator | 2025-05-05 00:48:57 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:48:57.934543 | orchestrator | 2025-05-05 00:48:57 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:00.987401 | orchestrator | 2025-05-05 00:48:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:00.987552 | orchestrator | 2025-05-05 00:49:00 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:00.988773 | orchestrator | 2025-05-05 00:49:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:00.992261 | orchestrator | 2025-05-05 00:49:00 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:00.995220 | orchestrator | 2025-05-05 00:49:00 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:00.995428 | orchestrator | 2025-05-05 00:49:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:04.046325 | orchestrator | 2025-05-05 00:49:04 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:04.048313 | orchestrator | 2025-05-05 00:49:04 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:04.050839 | orchestrator | 2025-05-05 00:49:04 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:04.052474 | orchestrator | 2025-05-05 00:49:04 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:07.094507 | orchestrator | 2025-05-05 00:49:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:07.094645 | orchestrator | 2025-05-05 00:49:07 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:07.096766 | orchestrator | 2025-05-05 00:49:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:07.096894 | orchestrator | 2025-05-05 00:49:07 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:07.098976 | orchestrator | 2025-05-05 00:49:07 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:07.099276 | orchestrator | 2025-05-05 00:49:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:10.141733 | orchestrator | 2025-05-05 00:49:10 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:10.142785 | orchestrator | 2025-05-05 00:49:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:10.142836 | orchestrator | 2025-05-05 00:49:10 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:10.143805 | orchestrator | 2025-05-05 00:49:10 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:13.189750 | orchestrator | 2025-05-05 00:49:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:13.189914 | orchestrator | 2025-05-05 00:49:13 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:13.190490 | orchestrator | 2025-05-05 00:49:13 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:13.191930 | orchestrator | 2025-05-05 00:49:13 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:13.192947 | orchestrator | 2025-05-05 00:49:13 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:16.249620 | orchestrator | 2025-05-05 00:49:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:16.249803 | orchestrator | 2025-05-05 00:49:16 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:16.252086 | orchestrator | 2025-05-05 00:49:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:16.254334 | orchestrator | 2025-05-05 00:49:16 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:16.256112 | orchestrator | 2025-05-05 00:49:16 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:16.256420 | orchestrator | 2025-05-05 00:49:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:19.307910 | orchestrator | 2025-05-05 00:49:19 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:19.311960 | orchestrator | 2025-05-05 00:49:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:19.313257 | orchestrator | 2025-05-05 00:49:19 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:19.314431 | orchestrator | 2025-05-05 00:49:19 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:19.314601 | orchestrator | 2025-05-05 00:49:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:22.358664 | orchestrator | 2025-05-05 00:49:22 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:22.359724 | orchestrator | 2025-05-05 00:49:22 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:22.360621 | orchestrator | 2025-05-05 00:49:22 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:22.360659 | orchestrator | 2025-05-05 00:49:22 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:25.407738 | orchestrator | 2025-05-05 00:49:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:25.407896 | orchestrator | 2025-05-05 00:49:25 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:25.408824 | orchestrator | 2025-05-05 00:49:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:25.408871 | orchestrator | 2025-05-05 00:49:25 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:25.411028 | orchestrator | 2025-05-05 00:49:25 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:28.464248 | orchestrator | 2025-05-05 00:49:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:28.464452 | orchestrator | 2025-05-05 00:49:28 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:28.464794 | orchestrator | 2025-05-05 00:49:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:28.464825 | orchestrator | 2025-05-05 00:49:28 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:28.464847 | orchestrator | 2025-05-05 00:49:28 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:31.514239 | orchestrator | 2025-05-05 00:49:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:31.514437 | orchestrator | 2025-05-05 00:49:31 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:31.515848 | orchestrator | 2025-05-05 00:49:31 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:31.517694 | orchestrator | 2025-05-05 00:49:31 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:31.519349 | orchestrator | 2025-05-05 00:49:31 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:31.519688 | orchestrator | 2025-05-05 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:34.575586 | orchestrator | 2025-05-05 00:49:34 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:34.575847 | orchestrator | 2025-05-05 00:49:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:34.575875 | orchestrator | 2025-05-05 00:49:34 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:34.575897 | orchestrator | 2025-05-05 00:49:34 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:37.612276 | orchestrator | 2025-05-05 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:37.612484 | orchestrator | 2025-05-05 00:49:37 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:37.612820 | orchestrator | 2025-05-05 00:49:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:37.612859 | orchestrator | 2025-05-05 00:49:37 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:37.613432 | orchestrator | 2025-05-05 00:49:37 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:40.645266 | orchestrator | 2025-05-05 00:49:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:40.645495 | orchestrator | 2025-05-05 00:49:40 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:40.646412 | orchestrator | 2025-05-05 00:49:40 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:40.647279 | orchestrator | 2025-05-05 00:49:40 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state STARTED 2025-05-05 00:49:40.648296 | orchestrator | 2025-05-05 00:49:40 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:43.680157 | orchestrator | 2025-05-05 00:49:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:43.680243 | orchestrator | 2025-05-05 00:49:43 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:43.680288 | orchestrator | 2025-05-05 00:49:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:43.681093 | orchestrator | 2025-05-05 00:49:43 | INFO  | Task 9ce77e4f-2c9d-43d4-af80-63eeb8407b3e is in state SUCCESS 2025-05-05 00:49:43.682225 | orchestrator | 2025-05-05 00:49:43.682247 | orchestrator | 2025-05-05 00:49:43.682256 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 00:49:43.682264 | orchestrator | 2025-05-05 00:49:43.682272 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 00:49:43.682279 | orchestrator | Monday 05 May 2025 00:47:18 +0000 (0:00:00.183) 0:00:00.183 ************ 2025-05-05 00:49:43.682294 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.682304 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.682312 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.682319 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:49:43.682327 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:49:43.682334 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:49:43.682342 | orchestrator | 2025-05-05 00:49:43.682349 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 00:49:43.682357 | orchestrator | Monday 05 May 2025 00:47:19 +0000 (0:00:00.585) 
0:00:00.768 ************ 2025-05-05 00:49:43.682365 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-05 00:49:43.682372 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-05 00:49:43.682396 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-05 00:49:43.682404 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-05 00:49:43.682412 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-05 00:49:43.682419 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-05 00:49:43.682427 | orchestrator | 2025-05-05 00:49:43.682434 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-05 00:49:43.682457 | orchestrator | 2025-05-05 00:49:43.682466 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-05 00:49:43.682474 | orchestrator | Monday 05 May 2025 00:47:20 +0000 (0:00:01.075) 0:00:01.844 ************ 2025-05-05 00:49:43.682482 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:49:43.682509 | orchestrator | 2025-05-05 00:49:43.682517 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-05 00:49:43.682539 | orchestrator | Monday 05 May 2025 00:47:21 +0000 (0:00:01.575) 0:00:03.420 ************ 2025-05-05 00:49:43.682547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 
00:49:43.682594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682633 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682641 | orchestrator | 2025-05-05 00:49:43.682649 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-05 00:49:43.682656 | orchestrator | Monday 05 May 2025 00:47:23 +0000 (0:00:01.036) 0:00:04.456 ************ 2025-05-05 00:49:43.682672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682744 | orchestrator | 2025-05-05 00:49:43.682751 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-05 00:49:43.682759 | orchestrator | Monday 05 May 2025 00:47:24 +0000 (0:00:01.912) 0:00:06.369 ************ 2025-05-05 00:49:43.682767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682837 | orchestrator | 2025-05-05 00:49:43.682846 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-05 00:49:43.682854 | orchestrator | Monday 05 May 2025 00:47:26 +0000 (0:00:01.135) 0:00:07.504 ************ 2025-05-05 00:49:43.682863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-05 00:49:43.682880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682922 | orchestrator | 2025-05-05 00:49:43.682930 | orchestrator | 
TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-05 00:49:43.682942 | orchestrator | Monday 05 May 2025 00:47:27 +0000 (0:00:01.450) 0:00:08.955 ************ 2025-05-05 00:49:43.682951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.682994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.683002 | orchestrator | 2025-05-05 00:49:43.683011 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-05 00:49:43.683019 | orchestrator | Monday 05 May 2025 00:47:28 +0000 (0:00:01.348) 0:00:10.303 ************ 2025-05-05 00:49:43.683028 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.683037 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.683045 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.683054 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:49:43.683063 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:49:43.683071 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:49:43.683079 | orchestrator | 2025-05-05 00:49:43.683087 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-05 00:49:43.683096 | 
orchestrator | Monday 05 May 2025 00:47:31 +0000 (0:00:02.738) 0:00:13.042 ************ 2025-05-05 00:49:43.683105 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-05 00:49:43.683113 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-05 00:49:43.683125 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-05 00:49:43.683137 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-05 00:49:43.683145 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-05 00:49:43.683153 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-05 00:49:43.683162 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-05 00:49:43.683171 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-05 00:49:43.683180 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-05 00:49:43.683191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-05 00:49:43.683199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-05 00:49:43.683206 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-05 00:49:43.683214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-05 00:49:43.683222 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-05-05 00:49:43.683230 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-05 00:49:43.683238 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-05 00:49:43.683245 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-05 00:49:43.683253 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-05 00:49:43.683261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-05 00:49:43.683285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-05 00:49:43.683293 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-05 00:49:43.683303 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-05 00:49:43.683311 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-05 00:49:43.683319 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-05 00:49:43.683326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-05 00:49:43.683334 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-05 00:49:43.683341 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-05 00:49:43.683349 | orchestrator 
| changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-05 00:49:43.683356 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-05 00:49:43.683364 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-05 00:49:43.683371 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-05 00:49:43.683396 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-05 00:49:43.683410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-05 00:49:43.683418 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-05 00:49:43.683426 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-05 00:49:43.683434 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-05 00:49:43.683441 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-05 00:49:43.683449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-05 00:49:43.683457 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-05 00:49:43.683464 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-05 00:49:43.683476 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-05 00:49:43.683483 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 
'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-05 00:49:43.683491 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-05 00:49:43.683499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-05 00:49:43.683507 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-05 00:49:43.683514 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-05 00:49:43.683522 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-05 00:49:43.683529 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-05 00:49:43.683537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-05 00:49:43.683545 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-05 00:49:43.683552 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-05 00:49:43.683560 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-05 00:49:43.683567 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-05 00:49:43.683575 | orchestrator | ok: [testbed-node-5] => (item={'name': 
'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-05 00:49:43.683583 | orchestrator | 2025-05-05 00:49:43.683590 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-05 00:49:43.683598 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:18.786) 0:00:31.828 ************ 2025-05-05 00:49:43.683605 | orchestrator | 2025-05-05 00:49:43.683613 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-05 00:49:43.683621 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:00.056) 0:00:31.884 ************ 2025-05-05 00:49:43.683628 | orchestrator | 2025-05-05 00:49:43.683636 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-05 00:49:43.683643 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:00.196) 0:00:32.081 ************ 2025-05-05 00:49:43.683654 | orchestrator | 2025-05-05 00:49:43.683662 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-05 00:49:43.683670 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:00.067) 0:00:32.149 ************ 2025-05-05 00:49:43.683677 | orchestrator | 2025-05-05 00:49:43.683685 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-05 00:49:43.683692 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:00.064) 0:00:32.213 ************ 2025-05-05 00:49:43.683700 | orchestrator | 2025-05-05 00:49:43.683707 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-05 00:49:43.683715 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:00.051) 0:00:32.265 ************ 2025-05-05 00:49:43.683722 | orchestrator | 2025-05-05 00:49:43.683730 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-05 00:49:43.683737 | orchestrator | Monday 05 May 2025 
00:47:51 +0000 (0:00:00.360) 0:00:32.626 ************ 2025-05-05 00:49:43.683745 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:49:43.683752 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.683760 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:49:43.683768 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.683775 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.683782 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:49:43.683790 | orchestrator | 2025-05-05 00:49:43.683797 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-05 00:49:43.683805 | orchestrator | Monday 05 May 2025 00:47:53 +0000 (0:00:02.316) 0:00:34.943 ************ 2025-05-05 00:49:43.683812 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.683820 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.683828 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.683835 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:49:43.683843 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:49:43.683850 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:49:43.683857 | orchestrator | 2025-05-05 00:49:43.683865 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-05 00:49:43.683873 | orchestrator | 2025-05-05 00:49:43.683880 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-05 00:49:43.683888 | orchestrator | Monday 05 May 2025 00:48:16 +0000 (0:00:23.289) 0:00:58.232 ************ 2025-05-05 00:49:43.683896 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:49:43.683903 | orchestrator | 2025-05-05 00:49:43.683911 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-05 00:49:43.683918 | orchestrator | Monday 05 May 2025 00:48:17 +0000 
(0:00:00.455) 0:00:58.687 ************ 2025-05-05 00:49:43.683926 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:49:43.683933 | orchestrator | 2025-05-05 00:49:43.683944 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-05 00:49:43.683954 | orchestrator | Monday 05 May 2025 00:48:17 +0000 (0:00:00.654) 0:00:59.342 ************ 2025-05-05 00:49:43.683962 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.683970 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.683977 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.683985 | orchestrator | 2025-05-05 00:49:43.683992 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-05 00:49:43.684000 | orchestrator | Monday 05 May 2025 00:48:18 +0000 (0:00:00.800) 0:01:00.142 ************ 2025-05-05 00:49:43.684007 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.684015 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.684022 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.684030 | orchestrator | 2025-05-05 00:49:43.684037 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-05 00:49:43.684045 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:00.294) 0:01:00.437 ************ 2025-05-05 00:49:43.684056 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.684063 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.684071 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.684078 | orchestrator | 2025-05-05 00:49:43.684086 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-05 00:49:43.684093 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:00.401) 0:01:00.839 ************ 2025-05-05 00:49:43.684101 | orchestrator | ok: [testbed-node-0] 2025-05-05 
00:49:43.684108 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.684116 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.684123 | orchestrator | 2025-05-05 00:49:43.684131 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-05 00:49:43.684138 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:00.403) 0:01:01.242 ************ 2025-05-05 00:49:43.684145 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.684153 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.684161 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.684168 | orchestrator | 2025-05-05 00:49:43.684175 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-05 00:49:43.684183 | orchestrator | Monday 05 May 2025 00:48:20 +0000 (0:00:00.419) 0:01:01.662 ************ 2025-05-05 00:49:43.684190 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684198 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684206 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684213 | orchestrator | 2025-05-05 00:49:43.684221 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-05 00:49:43.684228 | orchestrator | Monday 05 May 2025 00:48:20 +0000 (0:00:00.406) 0:01:02.069 ************ 2025-05-05 00:49:43.684236 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684250 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684257 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684265 | orchestrator | 2025-05-05 00:49:43.684272 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-05 00:49:43.684280 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.631) 0:01:02.700 ************ 2025-05-05 00:49:43.684287 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684295 | orchestrator | 
skipping: [testbed-node-1] 2025-05-05 00:49:43.684302 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684310 | orchestrator | 2025-05-05 00:49:43.684317 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-05 00:49:43.684325 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.386) 0:01:03.087 ************ 2025-05-05 00:49:43.684332 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684340 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684347 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684362 | orchestrator | 2025-05-05 00:49:43.684370 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-05 00:49:43.684390 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.289) 0:01:03.376 ************ 2025-05-05 00:49:43.684398 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684406 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684413 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684421 | orchestrator | 2025-05-05 00:49:43.684428 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-05 00:49:43.684436 | orchestrator | Monday 05 May 2025 00:48:22 +0000 (0:00:00.396) 0:01:03.772 ************ 2025-05-05 00:49:43.684444 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684451 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684459 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684466 | orchestrator | 2025-05-05 00:49:43.684474 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-05 00:49:43.684481 | orchestrator | Monday 05 May 2025 00:48:22 +0000 (0:00:00.343) 0:01:04.116 ************ 2025-05-05 00:49:43.684489 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684500 | orchestrator | 
skipping: [testbed-node-1] 2025-05-05 00:49:43.684507 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684515 | orchestrator | 2025-05-05 00:49:43.684522 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-05 00:49:43.684530 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.309) 0:01:04.425 ************ 2025-05-05 00:49:43.684537 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684545 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684553 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684560 | orchestrator | 2025-05-05 00:49:43.684568 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-05 00:49:43.684575 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.246) 0:01:04.671 ************ 2025-05-05 00:49:43.684583 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684590 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684598 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684605 | orchestrator | 2025-05-05 00:49:43.684613 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-05 00:49:43.684620 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.303) 0:01:04.974 ************ 2025-05-05 00:49:43.684628 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684636 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684643 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684650 | orchestrator | 2025-05-05 00:49:43.684661 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-05 00:49:43.684828 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.340) 0:01:05.315 ************ 2025-05-05 00:49:43.684840 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684848 | orchestrator | 
skipping: [testbed-node-1] 2025-05-05 00:49:43.684856 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684863 | orchestrator | 2025-05-05 00:49:43.684871 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-05 00:49:43.684882 | orchestrator | Monday 05 May 2025 00:48:24 +0000 (0:00:00.383) 0:01:05.699 ************ 2025-05-05 00:49:43.684890 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.684897 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.684905 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.684912 | orchestrator | 2025-05-05 00:49:43.684919 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-05 00:49:43.684927 | orchestrator | Monday 05 May 2025 00:48:24 +0000 (0:00:00.265) 0:01:05.964 ************ 2025-05-05 00:49:43.684935 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:49:43.684942 | orchestrator | 2025-05-05 00:49:43.684950 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-05 00:49:43.684957 | orchestrator | Monday 05 May 2025 00:48:25 +0000 (0:00:00.644) 0:01:06.608 ************ 2025-05-05 00:49:43.684965 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.684972 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.684979 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.684987 | orchestrator | 2025-05-05 00:49:43.684994 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-05 00:49:43.685002 | orchestrator | Monday 05 May 2025 00:48:25 +0000 (0:00:00.469) 0:01:07.078 ************ 2025-05-05 00:49:43.685009 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.685016 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.685024 | orchestrator | ok: [testbed-node-2] 
2025-05-05 00:49:43.685031 | orchestrator | 2025-05-05 00:49:43.685039 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-05 00:49:43.685046 | orchestrator | Monday 05 May 2025 00:48:26 +0000 (0:00:00.443) 0:01:07.522 ************ 2025-05-05 00:49:43.685054 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685061 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.685069 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.685080 | orchestrator | 2025-05-05 00:49:43.685088 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-05 00:49:43.685096 | orchestrator | Monday 05 May 2025 00:48:26 +0000 (0:00:00.421) 0:01:07.944 ************ 2025-05-05 00:49:43.685103 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685111 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.685118 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.685125 | orchestrator | 2025-05-05 00:49:43.685133 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-05 00:49:43.685140 | orchestrator | Monday 05 May 2025 00:48:26 +0000 (0:00:00.456) 0:01:08.400 ************ 2025-05-05 00:49:43.685148 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685155 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.685163 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.685170 | orchestrator | 2025-05-05 00:49:43.685178 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-05 00:49:43.685185 | orchestrator | Monday 05 May 2025 00:48:27 +0000 (0:00:00.313) 0:01:08.713 ************ 2025-05-05 00:49:43.685193 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685200 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.685208 | orchestrator | skipping: 
[testbed-node-2] 2025-05-05 00:49:43.685215 | orchestrator | 2025-05-05 00:49:43.685223 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-05 00:49:43.685230 | orchestrator | Monday 05 May 2025 00:48:27 +0000 (0:00:00.463) 0:01:09.177 ************ 2025-05-05 00:49:43.685238 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685248 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.685256 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.685263 | orchestrator | 2025-05-05 00:49:43.685270 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-05 00:49:43.685278 | orchestrator | Monday 05 May 2025 00:48:28 +0000 (0:00:00.437) 0:01:09.614 ************ 2025-05-05 00:49:43.685285 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685293 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.685300 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.685307 | orchestrator | 2025-05-05 00:49:43.685315 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-05 00:49:43.685323 | orchestrator | Monday 05 May 2025 00:48:28 +0000 (0:00:00.536) 0:01:10.150 ************ 2025-05-05 00:49:43.685330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685418 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685441 | orchestrator | 2025-05-05 00:49:43.685449 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-05 00:49:43.685457 | orchestrator | Monday 05 May 2025 00:48:30 +0000 (0:00:01.525) 0:01:11.676 ************ 2025-05-05 00:49:43.685466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-05 00:49:43.685474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685578 | orchestrator | 2025-05-05 00:49:43.685586 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-05 00:49:43.685595 | orchestrator | Monday 05 May 2025 00:48:34 +0000 (0:00:04.282) 0:01:15.958 ************ 2025-05-05 00:49:43.685603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-05 00:49:43.685660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.685696 | orchestrator | 2025-05-05 00:49:43.685705 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-05-05 00:49:43.685713 | orchestrator | Monday 05 May 2025 00:48:37 +0000 (0:00:02.499) 0:01:18.458 ************ 2025-05-05 00:49:43.685721 | orchestrator | 2025-05-05 00:49:43.685729 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-05 00:49:43.685738 | orchestrator | Monday 05 May 2025 00:48:37 +0000 (0:00:00.052) 0:01:18.510 ************ 2025-05-05 00:49:43.685746 | orchestrator | 2025-05-05 00:49:43.685754 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-05 00:49:43.685763 | orchestrator | Monday 05 May 2025 00:48:37 +0000 (0:00:00.049) 0:01:18.559 ************ 2025-05-05 00:49:43.685771 | orchestrator | 2025-05-05 00:49:43.685779 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-05 00:49:43.685791 | orchestrator | Monday 05 May 2025 00:48:37 +0000 (0:00:00.151) 0:01:18.711 ************ 2025-05-05 00:49:43.685799 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.685808 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.685815 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.685823 | orchestrator | 2025-05-05 00:49:43.685830 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-05 00:49:43.685837 | orchestrator | Monday 05 May 2025 00:48:44 +0000 (0:00:07.399) 0:01:26.110 ************ 2025-05-05 00:49:43.685845 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.685855 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.685863 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.685871 | orchestrator | 2025-05-05 00:49:43.685878 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-05 00:49:43.685885 | orchestrator | Monday 05 May 2025 00:48:52 +0000 (0:00:07.868) 
0:01:33.978 ************ 2025-05-05 00:49:43.685893 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.685900 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.685908 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.685915 | orchestrator | 2025-05-05 00:49:43.685923 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-05 00:49:43.685930 | orchestrator | Monday 05 May 2025 00:49:00 +0000 (0:00:07.793) 0:01:41.772 ************ 2025-05-05 00:49:43.685937 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.685945 | orchestrator | 2025-05-05 00:49:43.685952 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-05 00:49:43.685959 | orchestrator | Monday 05 May 2025 00:49:00 +0000 (0:00:00.141) 0:01:41.914 ************ 2025-05-05 00:49:43.685967 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.685974 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.685982 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.685989 | orchestrator | 2025-05-05 00:49:43.685999 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-05 00:49:43.686007 | orchestrator | Monday 05 May 2025 00:49:01 +0000 (0:00:01.106) 0:01:43.021 ************ 2025-05-05 00:49:43.686044 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.686054 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.686062 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.686069 | orchestrator | 2025-05-05 00:49:43.686077 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-05 00:49:43.686084 | orchestrator | Monday 05 May 2025 00:49:02 +0000 (0:00:00.588) 0:01:43.610 ************ 2025-05-05 00:49:43.686092 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.686099 | orchestrator | ok: [testbed-node-1] 2025-05-05 
00:49:43.686106 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.686114 | orchestrator | 2025-05-05 00:49:43.686121 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-05 00:49:43.686129 | orchestrator | Monday 05 May 2025 00:49:03 +0000 (0:00:00.889) 0:01:44.499 ************ 2025-05-05 00:49:43.686136 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.686144 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.686151 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.686159 | orchestrator | 2025-05-05 00:49:43.686166 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-05 00:49:43.686174 | orchestrator | Monday 05 May 2025 00:49:03 +0000 (0:00:00.633) 0:01:45.133 ************ 2025-05-05 00:49:43.686181 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.686189 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.686196 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.686204 | orchestrator | 2025-05-05 00:49:43.686211 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-05 00:49:43.686219 | orchestrator | Monday 05 May 2025 00:49:04 +0000 (0:00:01.007) 0:01:46.140 ************ 2025-05-05 00:49:43.686226 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.686233 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.686241 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.686248 | orchestrator | 2025-05-05 00:49:43.686255 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-05 00:49:43.686263 | orchestrator | Monday 05 May 2025 00:49:05 +0000 (0:00:00.738) 0:01:46.878 ************ 2025-05-05 00:49:43.686270 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.686278 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.686285 | orchestrator | ok: [testbed-node-2] 
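The "Wait for ovn-nb-db" and "Wait for ovn-sb-db" tasks above block until the OVN Northbound and Southbound databases accept TCP connections (by default on ports 6641 and 6642). A minimal sketch of such a readiness check in Python, assuming a plain port-poll like Ansible's `wait_for` module; the timeout and interval values are illustrative, not taken from the kolla-ansible role:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0,
                  interval: float = 1.0) -> bool:
    """Poll until a TCP port accepts connections, similar to Ansible's wait_for.

    Returns True once a connection succeeds, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the DB server is listening.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Not up yet (refused / unreachable); retry after a short pause.
            time.sleep(interval)
    return False
```

In the play above this gate runs on every controller node, so the handlers that restarted the `ovn_nb_db` and `ovn_sb_db` containers are known to have produced listening servers before the role continues.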
2025-05-05 00:49:43.686293 | orchestrator | 2025-05-05 00:49:43.686300 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-05 00:49:43.686312 | orchestrator | Monday 05 May 2025 00:49:05 +0000 (0:00:00.468) 0:01:47.347 ************ 2025-05-05 00:49:43.686319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686328 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686343 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-05 00:49:43.686351 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686359 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686378 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686396 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686404 | orchestrator | 2025-05-05 00:49:43.686411 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-05 00:49:43.686425 | orchestrator | Monday 05 May 2025 00:49:07 +0000 (0:00:01.721) 0:01:49.068 ************ 2025-05-05 00:49:43.686433 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686444 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686452 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686462 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686488 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686514 | orchestrator | 2025-05-05 00:49:43.686522 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-05 00:49:43.686530 | orchestrator | Monday 05 May 2025 00:49:11 +0000 (0:00:04.315) 0:01:53.383 ************ 2025-05-05 00:49:43.686537 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686545 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686553 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686560 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686587 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686597 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686616 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 00:49:43.686627 | orchestrator | 2025-05-05 00:49:43.686635 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-05 00:49:43.686643 | orchestrator | Monday 05 May 2025 00:49:14 +0000 (0:00:02.952) 0:01:56.335 ************ 2025-05-05 00:49:43.686650 | orchestrator | 2025-05-05 00:49:43.686657 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-05 00:49:43.686665 | orchestrator | Monday 05 May 2025 00:49:15 +0000 (0:00:00.205) 0:01:56.541 ************ 2025-05-05 00:49:43.686672 | orchestrator | 2025-05-05 00:49:43.686680 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-05 00:49:43.686687 | orchestrator | Monday 05 May 2025 00:49:15 +0000 (0:00:00.065) 0:01:56.607 ************ 2025-05-05 00:49:43.686695 | orchestrator | 2025-05-05 00:49:43.686702 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-05 00:49:43.686710 | orchestrator | Monday 05 May 2025 00:49:15 +0000 (0:00:00.060) 0:01:56.667 ************ 2025-05-05 00:49:43.686717 | orchestrator | changed: 
[testbed-node-1] 2025-05-05 00:49:43.686724 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.686732 | orchestrator | 2025-05-05 00:49:43.686739 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-05 00:49:43.686746 | orchestrator | Monday 05 May 2025 00:49:21 +0000 (0:00:06.580) 0:02:03.247 ************ 2025-05-05 00:49:43.686754 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.686761 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.686769 | orchestrator | 2025-05-05 00:49:43.686776 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-05 00:49:43.686784 | orchestrator | Monday 05 May 2025 00:49:28 +0000 (0:00:06.449) 0:02:09.697 ************ 2025-05-05 00:49:43.686791 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:49:43.686799 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:49:43.686806 | orchestrator | 2025-05-05 00:49:43.686813 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-05 00:49:43.686821 | orchestrator | Monday 05 May 2025 00:49:34 +0000 (0:00:06.374) 0:02:16.072 ************ 2025-05-05 00:49:43.686828 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:49:43.686836 | orchestrator | 2025-05-05 00:49:43.686844 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-05 00:49:43.686851 | orchestrator | Monday 05 May 2025 00:49:34 +0000 (0:00:00.313) 0:02:16.385 ************ 2025-05-05 00:49:43.686858 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.686866 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.686873 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.686880 | orchestrator | 2025-05-05 00:49:43.686888 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-05 00:49:43.686895 | orchestrator | Monday 05 
May 2025 00:49:35 +0000 (0:00:00.814) 0:02:17.199 ************ 2025-05-05 00:49:43.686903 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.686910 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.686917 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.686925 | orchestrator | 2025-05-05 00:49:43.686932 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-05 00:49:43.686940 | orchestrator | Monday 05 May 2025 00:49:36 +0000 (0:00:00.703) 0:02:17.903 ************ 2025-05-05 00:49:43.686947 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.686954 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.686962 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.686974 | orchestrator | 2025-05-05 00:49:43.686982 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-05 00:49:43.686990 | orchestrator | Monday 05 May 2025 00:49:37 +0000 (0:00:01.311) 0:02:19.214 ************ 2025-05-05 00:49:43.686997 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:49:43.687006 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:49:43.687013 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:49:43.687021 | orchestrator | 2025-05-05 00:49:43.687032 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-05 00:49:43.687040 | orchestrator | Monday 05 May 2025 00:49:38 +0000 (0:00:00.748) 0:02:19.963 ************ 2025-05-05 00:49:43.687047 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.687055 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.687062 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.687069 | orchestrator | 2025-05-05 00:49:43.687077 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-05 00:49:43.687085 | orchestrator | Monday 05 May 2025 00:49:39 +0000 (0:00:00.740) 
0:02:20.704 ************ 2025-05-05 00:49:43.687092 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:49:43.687099 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:49:43.687107 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:49:43.687114 | orchestrator | 2025-05-05 00:49:43.687122 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:49:43.687129 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-05 00:49:43.687137 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-05 00:49:43.687148 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-05 00:49:46.724942 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:49:46.725065 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:49:46.725084 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 00:49:46.725100 | orchestrator | 2025-05-05 00:49:46.725115 | orchestrator | 2025-05-05 00:49:46.725130 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:49:46.725145 | orchestrator | Monday 05 May 2025 00:49:41 +0000 (0:00:01.754) 0:02:22.458 ************ 2025-05-05 00:49:46.725159 | orchestrator | =============================================================================== 2025-05-05 00:49:46.725173 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.29s 2025-05-05 00:49:46.725187 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.79s 2025-05-05 00:49:46.725201 | orchestrator | ovn-db : Restart ovn-sb-db container 
----------------------------------- 14.32s 2025-05-05 00:49:46.725215 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.17s 2025-05-05 00:49:46.725229 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.98s 2025-05-05 00:49:46.725243 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.32s 2025-05-05 00:49:46.725263 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.28s 2025-05-05 00:49:46.725277 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s 2025-05-05 00:49:46.725291 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.74s 2025-05-05 00:49:46.725305 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.50s 2025-05-05 00:49:46.725319 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.32s 2025-05-05 00:49:46.725333 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.91s 2025-05-05 00:49:46.725347 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.75s 2025-05-05 00:49:46.725361 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.72s 2025-05-05 00:49:46.725375 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.58s 2025-05-05 00:49:46.725457 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2025-05-05 00:49:46.725474 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.45s 2025-05-05 00:49:46.725489 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.35s 2025-05-05 00:49:46.725505 | orchestrator | ovn-db : Get OVN_Southbound cluster leader 
------------------------------ 1.31s 2025-05-05 00:49:46.725521 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.14s 2025-05-05 00:49:46.725537 | orchestrator | 2025-05-05 00:49:43 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:46.725553 | orchestrator | 2025-05-05 00:49:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:46.725584 | orchestrator | 2025-05-05 00:49:46 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:46.726857 | orchestrator | 2025-05-05 00:49:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:46.728630 | orchestrator | 2025-05-05 00:49:46 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:46.729030 | orchestrator | 2025-05-05 00:49:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:49.766746 | orchestrator | 2025-05-05 00:49:49 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:49.767848 | orchestrator | 2025-05-05 00:49:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:52.814898 | orchestrator | 2025-05-05 00:49:49 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:52.815090 | orchestrator | 2025-05-05 00:49:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:52.815133 | orchestrator | 2025-05-05 00:49:52 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:52.815485 | orchestrator | 2025-05-05 00:49:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:52.816898 | orchestrator | 2025-05-05 00:49:52 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:55.860363 | orchestrator | 2025-05-05 00:49:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:49:55.860593 | 
orchestrator | 2025-05-05 00:49:55 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:49:55.860667 | orchestrator | 2025-05-05 00:49:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:49:55.860686 | orchestrator | 2025-05-05 00:49:55 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:49:58.911634 | orchestrator | 2025-05-05 00:49:55 | INFO  | Wait 1 second(s) until the next check
fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:50:53.844551 | orchestrator | 2025-05-05 00:50:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:50:53.844590 | orchestrator | 2025-05-05 00:50:53 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:50:56.898213 | orchestrator | 2025-05-05 00:50:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:50:56.898375 | orchestrator | 2025-05-05 00:50:56 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:50:56.898980 | orchestrator | 2025-05-05 00:50:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:50:56.900170 | orchestrator | 2025-05-05 00:50:56 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:50:59.936523 | orchestrator | 2025-05-05 00:50:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:50:59.936708 | orchestrator | 2025-05-05 00:50:59 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:50:59.937272 | orchestrator | 2025-05-05 00:50:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:50:59.938586 | orchestrator | 2025-05-05 00:50:59 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:02.990337 | orchestrator | 2025-05-05 00:50:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:02.990559 | orchestrator | 2025-05-05 00:51:02 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:02.990650 | orchestrator | 2025-05-05 00:51:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:02.991722 | orchestrator | 2025-05-05 00:51:02 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:02.991844 | orchestrator | 2025-05-05 00:51:02 | INFO  | Wait 1 second(s) until the next 
check 2025-05-05 00:51:06.057641 | orchestrator | 2025-05-05 00:51:06 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:06.057888 | orchestrator | 2025-05-05 00:51:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:06.058750 | orchestrator | 2025-05-05 00:51:06 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:09.098201 | orchestrator | 2025-05-05 00:51:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:09.098342 | orchestrator | 2025-05-05 00:51:09 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:09.098929 | orchestrator | 2025-05-05 00:51:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:09.105515 | orchestrator | 2025-05-05 00:51:09 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:12.140931 | orchestrator | 2025-05-05 00:51:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:12.141072 | orchestrator | 2025-05-05 00:51:12 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:12.142669 | orchestrator | 2025-05-05 00:51:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:12.144523 | orchestrator | 2025-05-05 00:51:12 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:15.201109 | orchestrator | 2025-05-05 00:51:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:15.201257 | orchestrator | 2025-05-05 00:51:15 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:18.248024 | orchestrator | 2025-05-05 00:51:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:18.248152 | orchestrator | 2025-05-05 00:51:15 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 
00:51:18.248172 | orchestrator | 2025-05-05 00:51:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:18.248216 | orchestrator | 2025-05-05 00:51:18 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:18.248530 | orchestrator | 2025-05-05 00:51:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:18.250126 | orchestrator | 2025-05-05 00:51:18 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:21.299495 | orchestrator | 2025-05-05 00:51:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:21.299626 | orchestrator | 2025-05-05 00:51:21 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:21.300308 | orchestrator | 2025-05-05 00:51:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:21.300748 | orchestrator | 2025-05-05 00:51:21 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:21.300984 | orchestrator | 2025-05-05 00:51:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:24.349434 | orchestrator | 2025-05-05 00:51:24 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:24.350513 | orchestrator | 2025-05-05 00:51:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:24.351552 | orchestrator | 2025-05-05 00:51:24 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:27.410454 | orchestrator | 2025-05-05 00:51:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:27.410591 | orchestrator | 2025-05-05 00:51:27 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:27.412034 | orchestrator | 2025-05-05 00:51:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:27.413052 | orchestrator | 2025-05-05 00:51:27 | 
INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:27.413249 | orchestrator | 2025-05-05 00:51:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:30.481768 | orchestrator | 2025-05-05 00:51:30 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:30.484159 | orchestrator | 2025-05-05 00:51:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:30.489192 | orchestrator | 2025-05-05 00:51:30 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:30.490077 | orchestrator | 2025-05-05 00:51:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:33.538495 | orchestrator | 2025-05-05 00:51:33 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:33.540681 | orchestrator | 2025-05-05 00:51:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:36.592507 | orchestrator | 2025-05-05 00:51:33 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:36.592635 | orchestrator | 2025-05-05 00:51:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:36.592672 | orchestrator | 2025-05-05 00:51:36 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:36.593603 | orchestrator | 2025-05-05 00:51:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:36.595037 | orchestrator | 2025-05-05 00:51:36 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:36.595288 | orchestrator | 2025-05-05 00:51:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:39.633818 | orchestrator | 2025-05-05 00:51:39 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:39.637121 | orchestrator | 2025-05-05 00:51:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in 
state STARTED 2025-05-05 00:51:39.637942 | orchestrator | 2025-05-05 00:51:39 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:42.690861 | orchestrator | 2025-05-05 00:51:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:42.691044 | orchestrator | 2025-05-05 00:51:42 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:42.693902 | orchestrator | 2025-05-05 00:51:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:42.693999 | orchestrator | 2025-05-05 00:51:42 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:42.694443 | orchestrator | 2025-05-05 00:51:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:45.744228 | orchestrator | 2025-05-05 00:51:45 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:45.745815 | orchestrator | 2025-05-05 00:51:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:45.748089 | orchestrator | 2025-05-05 00:51:45 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:45.748644 | orchestrator | 2025-05-05 00:51:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:48.795267 | orchestrator | 2025-05-05 00:51:48 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:48.798559 | orchestrator | 2025-05-05 00:51:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:48.799210 | orchestrator | 2025-05-05 00:51:48 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:51.853455 | orchestrator | 2025-05-05 00:51:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:51.853592 | orchestrator | 2025-05-05 00:51:51 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:51.855778 | orchestrator 
| 2025-05-05 00:51:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:51.857130 | orchestrator | 2025-05-05 00:51:51 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:51.857710 | orchestrator | 2025-05-05 00:51:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:54.911046 | orchestrator | 2025-05-05 00:51:54 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:54.912850 | orchestrator | 2025-05-05 00:51:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:54.914547 | orchestrator | 2025-05-05 00:51:54 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:51:54.914763 | orchestrator | 2025-05-05 00:51:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:51:57.974097 | orchestrator | 2025-05-05 00:51:57 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:51:57.974322 | orchestrator | 2025-05-05 00:51:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:51:57.976632 | orchestrator | 2025-05-05 00:51:57 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:01.036992 | orchestrator | 2025-05-05 00:51:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:01.037110 | orchestrator | 2025-05-05 00:52:01 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:01.038837 | orchestrator | 2025-05-05 00:52:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:01.039711 | orchestrator | 2025-05-05 00:52:01 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:01.039975 | orchestrator | 2025-05-05 00:52:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:04.105708 | orchestrator | 2025-05-05 00:52:04 | INFO  | Task 
fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:04.107678 | orchestrator | 2025-05-05 00:52:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:04.110338 | orchestrator | 2025-05-05 00:52:04 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:04.110520 | orchestrator | 2025-05-05 00:52:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:07.174357 | orchestrator | 2025-05-05 00:52:07 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:07.175886 | orchestrator | 2025-05-05 00:52:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:07.175949 | orchestrator | 2025-05-05 00:52:07 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:10.236648 | orchestrator | 2025-05-05 00:52:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:10.236786 | orchestrator | 2025-05-05 00:52:10 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:10.237965 | orchestrator | 2025-05-05 00:52:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:10.238084 | orchestrator | 2025-05-05 00:52:10 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:13.294245 | orchestrator | 2025-05-05 00:52:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:13.294390 | orchestrator | 2025-05-05 00:52:13 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:13.295715 | orchestrator | 2025-05-05 00:52:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:13.298208 | orchestrator | 2025-05-05 00:52:13 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:13.298812 | orchestrator | 2025-05-05 00:52:13 | INFO  | Wait 1 second(s) until the next 
check 2025-05-05 00:52:16.368833 | orchestrator | 2025-05-05 00:52:16 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:16.369502 | orchestrator | 2025-05-05 00:52:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:16.369549 | orchestrator | 2025-05-05 00:52:16 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:19.412953 | orchestrator | 2025-05-05 00:52:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:19.413100 | orchestrator | 2025-05-05 00:52:19 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:19.414014 | orchestrator | 2025-05-05 00:52:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:19.419040 | orchestrator | 2025-05-05 00:52:19 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:22.471472 | orchestrator | 2025-05-05 00:52:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:22.471616 | orchestrator | 2025-05-05 00:52:22 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:22.472438 | orchestrator | 2025-05-05 00:52:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:22.474307 | orchestrator | 2025-05-05 00:52:22 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:22.474757 | orchestrator | 2025-05-05 00:52:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:25.526294 | orchestrator | 2025-05-05 00:52:25 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:25.528891 | orchestrator | 2025-05-05 00:52:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:25.531243 | orchestrator | 2025-05-05 00:52:25 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 
00:52:28.592913 | orchestrator | 2025-05-05 00:52:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:28.593055 | orchestrator | 2025-05-05 00:52:28 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:28.593614 | orchestrator | 2025-05-05 00:52:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:28.596522 | orchestrator | 2025-05-05 00:52:28 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:31.712507 | orchestrator | 2025-05-05 00:52:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:31.712647 | orchestrator | 2025-05-05 00:52:31 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:31.714751 | orchestrator | 2025-05-05 00:52:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:34.785512 | orchestrator | 2025-05-05 00:52:31 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:34.785663 | orchestrator | 2025-05-05 00:52:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:34.785712 | orchestrator | 2025-05-05 00:52:34 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:34.787468 | orchestrator | 2025-05-05 00:52:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:34.788640 | orchestrator | 2025-05-05 00:52:34 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:37.844346 | orchestrator | 2025-05-05 00:52:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:37.844563 | orchestrator | 2025-05-05 00:52:37 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:37.845010 | orchestrator | 2025-05-05 00:52:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:37.845886 | orchestrator | 2025-05-05 00:52:37 | 
INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:40.936280 | orchestrator | 2025-05-05 00:52:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:40.936474 | orchestrator | 2025-05-05 00:52:40 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:40.937644 | orchestrator | 2025-05-05 00:52:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:40.940456 | orchestrator | 2025-05-05 00:52:40 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:44.007611 | orchestrator | 2025-05-05 00:52:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:44.007770 | orchestrator | 2025-05-05 00:52:44 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:44.009028 | orchestrator | 2025-05-05 00:52:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:44.010671 | orchestrator | 2025-05-05 00:52:44 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:47.081192 | orchestrator | 2025-05-05 00:52:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:47.081336 | orchestrator | 2025-05-05 00:52:47 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:47.083308 | orchestrator | 2025-05-05 00:52:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:47.083864 | orchestrator | 2025-05-05 00:52:47 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:50.135491 | orchestrator | 2025-05-05 00:52:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:50.135657 | orchestrator | 2025-05-05 00:52:50 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:50.135763 | orchestrator | 2025-05-05 00:52:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in 
state STARTED 2025-05-05 00:52:50.136639 | orchestrator | 2025-05-05 00:52:50 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:53.179760 | orchestrator | 2025-05-05 00:52:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:53.179926 | orchestrator | 2025-05-05 00:52:53 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:53.181094 | orchestrator | 2025-05-05 00:52:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:53.182340 | orchestrator | 2025-05-05 00:52:53 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:56.240628 | orchestrator | 2025-05-05 00:52:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:56.240809 | orchestrator | 2025-05-05 00:52:56 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:56.242836 | orchestrator | 2025-05-05 00:52:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:56.244493 | orchestrator | 2025-05-05 00:52:56 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:52:59.282197 | orchestrator | 2025-05-05 00:52:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:52:59.282364 | orchestrator | 2025-05-05 00:52:59 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:52:59.283044 | orchestrator | 2025-05-05 00:52:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:52:59.283966 | orchestrator | 2025-05-05 00:52:59 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED 2025-05-05 00:53:02.337448 | orchestrator | 2025-05-05 00:52:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:53:02.337672 | orchestrator | 2025-05-05 00:53:02 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:53:02.337770 | orchestrator 
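The repeated STARTED lines above come from a client polling the task queue every few seconds until each task reaches a terminal state. A minimal sketch of such a poll loop; the `get_task_state` callback and the state names are assumptions for illustration, not the actual osism client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll until every task leaves the STARTED state, like the log above.

    get_task_state is a hypothetical callback returning e.g. "STARTED",
    "SUCCESS" or "FAILURE" for a task id.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Tasks are re-checked each round, so three long-running tasks produce exactly the interleaved "Task … STARTED / Wait 1 second(s)" pattern seen here.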
| 2025-05-05 00:53:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:53:02.337794 | orchestrator | 2025-05-05 00:53:02 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state STARTED
2025-05-05 00:53:02.338755 | orchestrator | 2025-05-05 00:53:02 | INFO  | Task 6dac9070-b623-40e4-a741-ce4498a2c2af is in state STARTED
2025-05-05 00:53:05.397982 | orchestrator | 2025-05-05 00:53:02 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:53:05.398182 | orchestrator | 2025-05-05 00:53:05 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:53:05.401176 | orchestrator | 2025-05-05 00:53:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:53:05.416291 | orchestrator |
2025-05-05 00:53:05.416451 | orchestrator |
2025-05-05 00:53:05.416473 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:53:05.416489 | orchestrator |
2025-05-05 00:53:05.416504 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 00:53:05.416518 | orchestrator | Monday 05 May 2025 00:46:03 +0000 (0:00:00.220) 0:00:00.220 ************
2025-05-05 00:53:05.416533 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.416633 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.416652 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:53:05.416666 | orchestrator |
2025-05-05 00:53:05.416743 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:53:05.416796 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.537) 0:00:00.758 ************
2025-05-05 00:53:05.416967 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-05 00:53:05.416984 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-05 00:53:05.417000 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-05 00:53:05.417016 | orchestrator |
2025-05-05 00:53:05.417032 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-05 00:53:05.417048 | orchestrator |
2025-05-05 00:53:05.417063 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-05 00:53:05.417078 | orchestrator | Monday 05 May 2025 00:46:04 +0000 (0:00:00.366) 0:00:01.124 ************
2025-05-05 00:53:05.417096 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.417112 | orchestrator |
2025-05-05 00:53:05.417128 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-05 00:53:05.417145 | orchestrator | Monday 05 May 2025 00:46:05 +0000 (0:00:00.923) 0:00:02.048 ************
2025-05-05 00:53:05.417161 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.417176 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.417190 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:53:05.417204 | orchestrator |
2025-05-05 00:53:05.417218 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-05 00:53:05.417232 | orchestrator | Monday 05 May 2025 00:46:07 +0000 (0:00:01.990) 0:00:04.038 ************
2025-05-05 00:53:05.417246 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.417260 | orchestrator |
2025-05-05 00:53:05.417301 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-05 00:53:05.417410 | orchestrator | Monday 05 May 2025 00:46:08 +0000 (0:00:00.976) 0:00:05.015 ************
2025-05-05 00:53:05.417477 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.417494 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.417509 | orchestrator | ok: [testbed-node-2]
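The sysctl role applied here takes a list of name/value items per node and treats the sentinel value KOLLA_UNSET as "leave the kernel default alone" (those items report ok rather than changed). A rough sketch of that filtering into a sysctl.conf fragment; `render_sysctl_conf` is a hypothetical helper for illustration, not Kolla's actual implementation:

```python
def render_sysctl_conf(settings):
    """Render a sysctl.conf fragment from Kolla-style settings, skipping
    entries whose value is the KOLLA_UNSET sentinel (kernel default kept)."""
    lines = []
    for item in settings:
        if item["value"] == "KOLLA_UNSET":
            continue  # do not manage this key at all
        lines.append(f"{item['name']}={item['value']}")
    return "\n".join(lines) + "\n"

# The same items the play loops over on each testbed node.
settings = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
print(render_sysctl_conf(settings))
```

The nonlocal-bind keys let haproxy bind the VIP on nodes that do not currently hold it, which is why the loadbalancer role sets them before starting the containers.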
2025-05-05 00:53:05.417523 | orchestrator |
2025-05-05 00:53:05.417590 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-05 00:53:05.417604 | orchestrator | Monday 05 May 2025 00:46:09 +0000 (0:00:00.804) 0:00:05.819 ************
2025-05-05 00:53:05.417619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-05 00:53:05.417634 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-05 00:53:05.417648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-05 00:53:05.417663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-05 00:53:05.417677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-05 00:53:05.417692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-05 00:53:05.417720 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-05 00:53:05.417736 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-05 00:53:05.417750 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-05 00:53:05.417764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-05 00:53:05.417778 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-05 00:53:05.417792 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-05 00:53:05.417806 | orchestrator |
2025-05-05 00:53:05.417820 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-05 00:53:05.417834 | orchestrator | Monday 05 May 2025 00:46:11 +0000 (0:00:02.592) 0:00:08.412 ************
2025-05-05 00:53:05.417848 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-05 00:53:05.417868 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-05 00:53:05.417882 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-05 00:53:05.417961 | orchestrator |
2025-05-05 00:53:05.417977 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-05 00:53:05.417991 | orchestrator | Monday 05 May 2025 00:46:12 +0000 (0:00:00.959) 0:00:09.371 ************
2025-05-05 00:53:05.418005 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-05 00:53:05.418077 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-05 00:53:05.418097 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-05 00:53:05.418111 | orchestrator |
2025-05-05 00:53:05.418126 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-05 00:53:05.418140 | orchestrator | Monday 05 May 2025 00:46:14 +0000 (0:00:01.648) 0:00:11.019 ************
2025-05-05 00:53:05.418154 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-05-05 00:53:05.418168 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.418260 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-05-05 00:53:05.418278 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.418293 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-05-05 00:53:05.418307 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.418322 | orchestrator |
2025-05-05 00:53:05.418337 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-05-05 00:53:05.418352 | orchestrator | Monday 05 May 2025 00:46:15 +0000 (0:00:00.708)
0:00:11.728 ************ 2025-05-05 00:53:05.418401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.418424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.418440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.418455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.418472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.418495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.418572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.418591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.418606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 
00:53:05.418622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.418637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.418653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.418693 | orchestrator | 2025-05-05 00:53:05.418710 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-05 00:53:05.418725 | orchestrator | Monday 05 May 2025 00:46:17 +0000 (0:00:02.198) 0:00:13.927 ************ 2025-05-05 00:53:05.418747 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.418761 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.418776 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.418862 | orchestrator | 2025-05-05 00:53:05.418886 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-05 00:53:05.418901 | orchestrator | Monday 05 May 2025 00:46:18 +0000 (0:00:01.501) 0:00:15.428 ************ 2025-05-05 00:53:05.418916 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-05 00:53:05.418930 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-05 00:53:05.418944 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-05 00:53:05.418958 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-05 00:53:05.418972 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-05 00:53:05.418986 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-05 00:53:05.419001 | orchestrator | 2025-05-05 00:53:05.419018 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-05 00:53:05.419034 | orchestrator | Monday 05 May 2025 00:46:21 +0000 (0:00:02.646) 0:00:18.075 ************ 2025-05-05 00:53:05.419049 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.419097 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.419113 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.419151 | orchestrator | 2025-05-05 00:53:05.419167 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 
2025-05-05 00:53:05.419181 | orchestrator | Monday 05 May 2025 00:46:23 +0000 (0:00:01.852) 0:00:19.928 ************ 2025-05-05 00:53:05.419195 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.419210 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.419224 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.419283 | orchestrator | 2025-05-05 00:53:05.419299 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-05 00:53:05.419314 | orchestrator | Monday 05 May 2025 00:46:26 +0000 (0:00:03.287) 0:00:23.215 ************ 2025-05-05 00:53:05.419329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.419345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.419360 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.419531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.419560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.419578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.419594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.419610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.419624 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.419684 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.419703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.419725 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.419741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-05-05 00:53:05.419764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.419779 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.419794 | orchestrator | 2025-05-05 00:53:05.419808 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-05 00:53:05.419822 | orchestrator | Monday 05 May 2025 00:46:30 +0000 (0:00:03.605) 0:00:26.820 ************ 2025-05-05 00:53:05.419837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.419852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.419867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.419951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.420039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.420135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.420178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.420192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.420223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.420237 | orchestrator | 2025-05-05 00:53:05.420250 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-05 00:53:05.420263 | orchestrator | Monday 05 May 2025 00:46:34 +0000 (0:00:04.349) 0:00:31.169 ************ 2025-05-05 00:53:05.420276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.420357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05 | INFO  | Task 864b3aac-b31a-4b90-bb2a-4f94281aa103 is in state SUCCESS 2025-05-05 00:53:05.420436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes':
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.420449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.420463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.420490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.420504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.420517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-05 00:53:05.420531 | orchestrator | 2025-05-05 00:53:05.420591 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-05 00:53:05.420608 | orchestrator | Monday 05 May 2025 00:46:37 +0000 (0:00:03.184) 0:00:34.354 ************ 2025-05-05 00:53:05.420621 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-05 00:53:05.420634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-05 00:53:05.420800 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-05 00:53:05.420824 | orchestrator | 2025-05-05 00:53:05.420841 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-05 00:53:05.420854 | orchestrator | Monday 05 May 2025 00:46:39 +0000 (0:00:02.096) 0:00:36.450 ************ 2025-05-05 00:53:05.420867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-05 00:53:05.420880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-05 00:53:05.420893 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-05 00:53:05.420905 | orchestrator | 2025-05-05 00:53:05.420948 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-05 00:53:05.420959 | orchestrator | Monday 05 May 2025 00:46:43 +0000 (0:00:04.004) 0:00:40.454 ************ 2025-05-05 00:53:05.420969 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.420980 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.420990 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.421000 | orchestrator | 2025-05-05 00:53:05.421011 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-05 00:53:05.421027 | orchestrator | Monday 05 May 2025 00:46:45 +0000 (0:00:02.017) 0:00:42.472 ************ 2025-05-05 00:53:05.421038 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-05 00:53:05.421049 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-05 00:53:05.421060 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-05 00:53:05.421071 | orchestrator | 2025-05-05 00:53:05.421081 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-05 00:53:05.421092 | orchestrator | Monday 05 May 2025 00:46:47 +0000 (0:00:02.151) 0:00:44.623 ************ 2025-05-05 00:53:05.421102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-05 00:53:05.421113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-05 00:53:05.421123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-05 00:53:05.421133 | orchestrator | 2025-05-05 00:53:05.421144 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-05 00:53:05.421166 | orchestrator | Monday 05 May 2025 00:46:50 +0000 (0:00:02.303) 0:00:46.926 ************ 2025-05-05 00:53:05.421197 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-05 00:53:05.421240 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-05 00:53:05.421251 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-05 00:53:05.421262 | orchestrator | 2025-05-05 00:53:05.421302 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-05 00:53:05.421315 | orchestrator | Monday 05 May 2025 00:46:52 +0000 (0:00:01.905) 0:00:48.831 ************ 
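Aside: the container definitions in this log declare healthchecks such as `['CMD-SHELL', 'healthcheck_listen proxysql 6032']` and `healthcheck_curl http://192.168.16.10:61313`. As a rough illustration only, a listen-style check boils down to "is something accepting TCP connections on that port"; the sketch below is a hypothetical re-implementation of that idea in Python and is not the actual kolla healthcheck script, whose behavior may differ.

```python
import socket


def healthcheck_listen(host: str, port: int, timeout: float = 3.0) -> bool:
    """Hypothetical sketch of a listen-style healthcheck: report whether a
    TCP connection to host:port succeeds within the timeout. The real
    'healthcheck_listen <name> <port>' script in kolla images may check more
    (e.g. the owning process), so treat this as an assumption, not a spec."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: report unhealthy.
        return False
```

A container runtime would run such a check every `interval` seconds and mark the container unhealthy after `retries` consecutive failures, which matches the `interval`/`retries`/`start_period`/`timeout` keys visible in the item dicts above.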
2025-05-05 00:53:05.421326 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-05 00:53:05.421337 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-05 00:53:05.421347 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-05 00:53:05.421358 | orchestrator | 2025-05-05 00:53:05.421368 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-05 00:53:05.421397 | orchestrator | Monday 05 May 2025 00:46:54 +0000 (0:00:01.838) 0:00:50.670 ************ 2025-05-05 00:53:05.421408 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.421418 | orchestrator | 2025-05-05 00:53:05.421429 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-05 00:53:05.421507 | orchestrator | Monday 05 May 2025 00:46:55 +0000 (0:00:01.067) 0:00:51.738 ************ 2025-05-05 00:53:05.421518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.421537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.421555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.421566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.421577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.421589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-05 00:53:05.421600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.421619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.421635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-05 00:53:05.421652 | orchestrator | 2025-05-05 00:53:05.421663 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-05 00:53:05.421674 | orchestrator | Monday 05 May 2025 00:46:58 +0000 (0:00:03.729) 0:00:55.467 ************ 2025-05-05 00:53:05.421685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.421695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.421810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.421821 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.421832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.421843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.421871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.421882 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.421892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.421903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.421914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.421924 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.421934 | orchestrator | 2025-05-05 00:53:05.421944 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-05 00:53:05.421955 | orchestrator | Monday 05 May 2025 00:46:59 +0000 (0:00:00.623) 0:00:56.091 ************ 2025-05-05 00:53:05.421965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.421976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.426217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.426335 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.426356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.426403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.426419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.426432 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.426445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-05 00:53:05.426459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-05 00:53:05.426500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-05 00:53:05.426514 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.426527 | orchestrator | 2025-05-05 00:53:05.426554 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-05 00:53:05.426569 | orchestrator | Monday 05 May 2025 00:47:00 +0000 (0:00:01.188) 0:00:57.279 ************ 2025-05-05 00:53:05.426582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-05 00:53:05.426596 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-05 00:53:05.426609 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-05 00:53:05.426621 | orchestrator | 2025-05-05 00:53:05.426634 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-05 00:53:05.426647 | orchestrator | Monday 05 May 2025 00:47:02 +0000 (0:00:02.117) 0:00:59.397 ************ 2025-05-05 00:53:05.426661 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-05 00:53:05.426674 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-05 00:53:05.426687 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-05 00:53:05.426699 | orchestrator | 2025-05-05 00:53:05.426712 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-05 00:53:05.426725 | orchestrator | Monday 05 May 2025 00:47:04 +0000 (0:00:02.068) 0:01:01.465 ************ 2025-05-05 00:53:05.426738 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-05 00:53:05.426750 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-05 00:53:05.426763 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-05 00:53:05.426775 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-05 00:53:05.426788 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.426802 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'}) 
 2025-05-05 00:53:05.426814 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.426827 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-05 00:53:05.426840 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.426852 | orchestrator | 2025-05-05 00:53:05.426865 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-05 00:53:05.426878 | orchestrator | Monday 05 May 2025 00:47:06 +0000 (0:00:01.640) 0:01:03.106 ************ 2025-05-05 00:53:05.426891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.426913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-05 00:53:05.426926 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-05 00:53:05.426947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-05 00:53:05.426965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-05 00:53:05.426979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-05 00:53:05.426992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-05 00:53:05.427005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-05 00:53:05.427026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-05 00:53:05.427048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-05 00:53:05.427063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-05 00:53:05.427081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3', '__omit_place_holder__097a4bf0dd09a2f0b3e8498187e218e2c6913fe3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-05 00:53:05.427095 | orchestrator |
2025-05-05 00:53:05.427109 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-05-05 00:53:05.427121 | orchestrator | Monday 05 May 2025 00:47:09 +0000 (0:00:03.362) 0:01:06.469 ************
2025-05-05 00:53:05.427135 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.427147 | orchestrator |
2025-05-05 00:53:05.427160 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-05-05 00:53:05.427173 | orchestrator | Monday 05 May 2025 00:47:10 +0000 (0:00:00.779) 0:01:07.248 ************
2025-05-05 00:53:05.427185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-05 00:53:05.427207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-05 00:53:05.427222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-05 00:53:05.427291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-05 00:53:05.427304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-05 00:53:05.427396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-05 00:53:05.427411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427438 | orchestrator |
2025-05-05 00:53:05.427450 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-05-05 00:53:05.427464 | orchestrator | Monday 05 May 2025 00:47:14 +0000 (0:00:03.385) 0:01:10.633 ************
2025-05-05 00:53:05.427483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-05 00:53:05.427496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-05 00:53:05.427509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427543 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.427557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-05 00:53:05.427570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-05 00:53:05.427588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427614 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.427628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-05 00:53:05.427675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-05 00:53:05.427690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.427722 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.427735 | orchestrator |
2025-05-05 00:53:05.427747 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-05 00:53:05.427760 | orchestrator | Monday 05 May 2025 00:47:14 +0000 (0:00:00.544) 0:01:11.178 ************
2025-05-05 00:53:05.427774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-05 00:53:05.427787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-05 00:53:05.427799 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.427812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-05 00:53:05.427824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-05 00:53:05.427837 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.427856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-05 00:53:05.427869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-05 00:53:05.427882 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.427894 | orchestrator |
2025-05-05 00:53:05.427906 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-05 00:53:05.427919 | orchestrator | Monday 05 May 2025 00:47:15 +0000 (0:00:00.785) 0:01:11.963 ************
2025-05-05 00:53:05.427931 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.427944 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.427956 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.427969 | orchestrator |
2025-05-05 00:53:05.427981 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-05 00:53:05.427993 | orchestrator | Monday 05 May 2025 00:47:16 +0000 (0:00:01.160) 0:01:13.124 ************
2025-05-05 00:53:05.428005 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.428018 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.428030 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.428043 | orchestrator |
2025-05-05 00:53:05.428055 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-05 00:53:05.428068 | orchestrator | Monday 05 May 2025 00:47:18 +0000 (0:00:01.786) 0:01:14.910 ************
2025-05-05 00:53:05.428081 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.428093 | orchestrator |
2025-05-05 00:53:05.428106 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-05 00:53:05.428118 | orchestrator | Monday 05 May 2025 00:47:19 +0000 (0:00:00.765) 0:01:15.675 ************
2025-05-05 00:53:05.428151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.428201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.428271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.428408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428453 | orchestrator |
2025-05-05 00:53:05.428466 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-05-05 00:53:05.428480 | orchestrator | Monday 05 May 2025 00:47:23 +0000 (0:00:04.283) 0:01:19.958 ************
2025-05-05 00:53:05.428493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.428516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428563 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.428577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.428590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.428603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name':
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.428616 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.428636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.428693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.428732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.428754 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.428777 | orchestrator | 2025-05-05 00:53:05.428798 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-05 00:53:05.428817 | orchestrator | Monday 05 May 2025 00:47:24 +0000 (0:00:01.132) 0:01:21.091 ************ 2025-05-05 00:53:05.428838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-05 00:53:05.428860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-05 00:53:05.428883 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.428905 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-05 00:53:05.428936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-05 00:53:05.428950 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.428963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-05 00:53:05.428975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-05 00:53:05.428988 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.429009 | orchestrator | 2025-05-05 00:53:05.429022 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-05 00:53:05.429035 | orchestrator | Monday 05 May 2025 00:47:25 +0000 (0:00:00.962) 0:01:22.054 ************ 2025-05-05 00:53:05.429047 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.429059 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.429072 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.429084 | orchestrator | 2025-05-05 00:53:05.429096 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-05 00:53:05.429109 | orchestrator | Monday 05 May 2025 00:47:26 +0000 (0:00:01.291) 0:01:23.345 ************ 2025-05-05 00:53:05.429121 | orchestrator | changed: [testbed-node-0] 2025-05-05 
00:53:05.429134 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.429146 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.429158 | orchestrator | 2025-05-05 00:53:05.429170 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-05 00:53:05.429183 | orchestrator | Monday 05 May 2025 00:47:28 +0000 (0:00:01.936) 0:01:25.281 ************ 2025-05-05 00:53:05.429195 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.429227 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.429240 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.429253 | orchestrator | 2025-05-05 00:53:05.429266 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-05 00:53:05.429278 | orchestrator | Monday 05 May 2025 00:47:28 +0000 (0:00:00.250) 0:01:25.531 ************ 2025-05-05 00:53:05.429291 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.429303 | orchestrator | 2025-05-05 00:53:05.429315 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-05 00:53:05.429327 | orchestrator | Monday 05 May 2025 00:47:29 +0000 (0:00:00.838) 0:01:26.369 ************ 2025-05-05 00:53:05.429340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-05 00:53:05.429355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-05 00:53:05.429424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-05 00:53:05.429450 | orchestrator | 2025-05-05 00:53:05.429463 | 
orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-05 00:53:05.429476 | orchestrator | Monday 05 May 2025 00:47:32 +0000 (0:00:02.949) 0:01:29.319 ************ 2025-05-05 00:53:05.429489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-05 00:53:05.429502 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.429524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}}}})  2025-05-05 00:53:05.429538 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.429550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-05 00:53:05.429563 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.429576 | orchestrator | 2025-05-05 00:53:05.429589 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-05 00:53:05.429601 | orchestrator | Monday 05 May 2025 00:47:34 +0000 (0:00:02.197) 0:01:31.517 ************ 2025-05-05 00:53:05.429615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-05 00:53:05.429637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-05 00:53:05.429652 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.429664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-05 00:53:05.429678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-05 00:53:05.429691 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.429703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-05 00:53:05.429733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-05 00:53:05.429756 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.429776 | orchestrator | 2025-05-05 00:53:05.429797 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-05 00:53:05.429819 | orchestrator | Monday 05 May 2025 00:47:37 +0000 (0:00:02.284) 0:01:33.802 ************ 2025-05-05 00:53:05.429840 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.429861 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.429874 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.429887 | orchestrator | 2025-05-05 00:53:05.429899 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-05 00:53:05.429912 | orchestrator | Monday 05 May 2025 00:47:37 +0000 (0:00:00.575) 0:01:34.377 ************ 2025-05-05 00:53:05.429928 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.429949 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.429970 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.429990 | orchestrator | 2025-05-05 00:53:05.430012 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-05 00:53:05.430081 | orchestrator | Monday 05 May 2025 00:47:38 +0000 (0:00:01.083) 0:01:35.461 ************ 2025-05-05 00:53:05.430098 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.430111 | orchestrator | 2025-05-05 00:53:05.430125 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-05 00:53:05.430137 | orchestrator | Monday 05 May 2025 00:47:39 +0000 (0:00:00.687) 0:01:36.149 ************ 2025-05-05 
00:53:05.430152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.430190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.430275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.430333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430423 | orchestrator | 2025-05-05 00:53:05.430436 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-05 00:53:05.430454 | orchestrator | Monday 05 May 2025 00:47:43 +0000 (0:00:03.566) 
0:01:39.715 ************ 2025-05-05 00:53:05.430467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.430487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430543 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.430557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.430570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430630 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.430651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.430664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.430703 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.430724 | orchestrator | 2025-05-05 
00:53:05.430748 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-05 00:53:05.430771 | orchestrator | Monday 05 May 2025 00:47:44 +0000 (0:00:00.991) 0:01:40.706 ************ 2025-05-05 00:53:05.430803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-05 00:53:05.430827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-05 00:53:05.430852 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.430888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-05 00:53:05.430912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-05 00:53:05.430930 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.430943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-05 00:53:05.430955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-05 00:53:05.430968 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.430981 | 
orchestrator | 2025-05-05 00:53:05.430994 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-05 00:53:05.431007 | orchestrator | Monday 05 May 2025 00:47:45 +0000 (0:00:01.196) 0:01:41.903 ************ 2025-05-05 00:53:05.431019 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.431031 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.431044 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.431056 | orchestrator | 2025-05-05 00:53:05.431069 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-05 00:53:05.431081 | orchestrator | Monday 05 May 2025 00:47:46 +0000 (0:00:01.395) 0:01:43.298 ************ 2025-05-05 00:53:05.431093 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.431106 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.431118 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.431130 | orchestrator | 2025-05-05 00:53:05.431143 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-05 00:53:05.431155 | orchestrator | Monday 05 May 2025 00:47:48 +0000 (0:00:02.241) 0:01:45.540 ************ 2025-05-05 00:53:05.431167 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.431180 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.431192 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.431205 | orchestrator | 2025-05-05 00:53:05.431217 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-05 00:53:05.431229 | orchestrator | Monday 05 May 2025 00:47:49 +0000 (0:00:00.321) 0:01:45.861 ************ 2025-05-05 00:53:05.431242 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.431254 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.431273 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.431286 | 
orchestrator | 2025-05-05 00:53:05.431298 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-05 00:53:05.431311 | orchestrator | Monday 05 May 2025 00:47:49 +0000 (0:00:00.470) 0:01:46.332 ************ 2025-05-05 00:53:05.431323 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.431336 | orchestrator | 2025-05-05 00:53:05.431348 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-05 00:53:05.431361 | orchestrator | Monday 05 May 2025 00:47:50 +0000 (0:00:01.023) 0:01:47.356 ************ 2025-05-05 00:53:05.431396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 00:53:05.431442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 00:53:05.431457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 00:53:05.431471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 00:53:05.431484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 00:53:05.431760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 00:53:05.431774 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431866 | orchestrator | 2025-05-05 00:53:05.431879 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-05 00:53:05.431892 | orchestrator | Monday 05 May 2025 00:47:55 +0000 (0:00:05.157) 0:01:52.513 ************ 2025-05-05 00:53:05.431905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 00:53:05.431919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 00:53:05.431932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.431976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432029 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.432042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 00:53:05.432055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 00:53:05.432080 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432171 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.432192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 00:53:05.432237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 00:53:05.432261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.432336 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.432349 | orchestrator | 2025-05-05 00:53:05.432361 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-05 00:53:05.432431 | orchestrator | Monday 05 May 2025 00:47:56 +0000 (0:00:01.031) 0:01:53.544 ************ 2025-05-05 00:53:05.432456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-05 00:53:05.432470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-05 00:53:05.432483 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.432496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-05 00:53:05.432509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-05 00:53:05.432519 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.432530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-05 00:53:05.432541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-05 00:53:05.432551 | orchestrator | skipping: 
[testbed-node-2] 2025-05-05 00:53:05.432561 | orchestrator | 2025-05-05 00:53:05.432571 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-05 00:53:05.432582 | orchestrator | Monday 05 May 2025 00:47:58 +0000 (0:00:01.274) 0:01:54.819 ************ 2025-05-05 00:53:05.432592 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.432602 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.432612 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.432623 | orchestrator | 2025-05-05 00:53:05.432633 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-05 00:53:05.432643 | orchestrator | Monday 05 May 2025 00:47:59 +0000 (0:00:01.267) 0:01:56.086 ************ 2025-05-05 00:53:05.432653 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.432664 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.432674 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.432684 | orchestrator | 2025-05-05 00:53:05.432694 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-05 00:53:05.432705 | orchestrator | Monday 05 May 2025 00:48:01 +0000 (0:00:01.934) 0:01:58.021 ************ 2025-05-05 00:53:05.432715 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.432725 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.432742 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.432752 | orchestrator | 2025-05-05 00:53:05.432763 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-05 00:53:05.432773 | orchestrator | Monday 05 May 2025 00:48:01 +0000 (0:00:00.509) 0:01:58.530 ************ 2025-05-05 00:53:05.432783 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.432793 | orchestrator | 2025-05-05 00:53:05.432804 | orchestrator | TASK 
[haproxy-config : Copying over glance haproxy config] ********************* 2025-05-05 00:53:05.432814 | orchestrator | Monday 05 May 2025 00:48:02 +0000 (0:00:01.048) 0:01:59.579 ************ 2025-05-05 00:53:05.432834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 00:53:05.432852 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.432879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 00:53:05.432906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 00:53:05.432925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.432951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 
'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.432963 | orchestrator | 2025-05-05 00:53:05.432973 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-05 00:53:05.432984 | orchestrator | Monday 05 May 2025 00:48:08 +0000 (0:00:05.314) 0:02:04.894 ************ 2025-05-05 00:53:05.433171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 00:53:05.433212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 
'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.433224 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.433236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 00:53:05.433262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.433291 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.433303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 00:53:05.433321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.433338 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.433349 | orchestrator | 2025-05-05 00:53:05.433360 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-05 00:53:05.433391 | orchestrator | Monday 05 May 2025 00:48:12 +0000 (0:00:03.978) 0:02:08.873 ************ 2025-05-05 00:53:05.433404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-05 00:53:05.433416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-05 00:53:05.433426 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.433437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-05 00:53:05.433449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-05 00:53:05.433465 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.433482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-05 00:53:05.433494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-05 00:53:05.433504 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.433515 | orchestrator | 2025-05-05 00:53:05.433525 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-05 00:53:05.433535 | orchestrator | Monday 05 May 2025 00:48:15 +0000 (0:00:03.422) 0:02:12.295 ************ 2025-05-05 00:53:05.433546 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.433556 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.433567 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.433578 | orchestrator | 2025-05-05 00:53:05.433588 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-05 00:53:05.433598 | orchestrator | Monday 05 May 2025 00:48:16 +0000 (0:00:01.103) 0:02:13.399 ************ 2025-05-05 00:53:05.433608 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.433619 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.433629 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.433640 | orchestrator | 2025-05-05 00:53:05.433650 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-05 00:53:05.433660 | orchestrator | Monday 05 May 2025 00:48:18 +0000 (0:00:01.771) 0:02:15.171 
************ 2025-05-05 00:53:05.433670 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.433681 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.433691 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.433701 | orchestrator | 2025-05-05 00:53:05.433711 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-05 00:53:05.433722 | orchestrator | Monday 05 May 2025 00:48:18 +0000 (0:00:00.375) 0:02:15.547 ************ 2025-05-05 00:53:05.433732 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.433743 | orchestrator | 2025-05-05 00:53:05.433753 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-05 00:53:05.433763 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:01.055) 0:02:16.602 ************ 2025-05-05 00:53:05.433776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 00:53:05.433789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 00:53:05.433815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 00:53:05.433828 | orchestrator | 2025-05-05 00:53:05.433840 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-05 00:53:05.433856 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:03.763) 0:02:20.366 ************ 2025-05-05 00:53:05.433868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-05 00:53:05.433881 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.433893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-05 00:53:05.433905 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.433917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-05 00:53:05.433929 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.433941 | orchestrator | 2025-05-05 00:53:05.433953 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-05 00:53:05.433964 | orchestrator | Monday 05 May 2025 
00:48:24 +0000 (0:00:00.341) 0:02:20.707 ************ 2025-05-05 00:53:05.433983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-05 00:53:05.434001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-05 00:53:05.434013 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.434070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-05 00:53:05.434084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-05 00:53:05.434096 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.434109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-05 00:53:05.434121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-05 00:53:05.434132 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.434143 | orchestrator | 2025-05-05 00:53:05.434154 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-05 00:53:05.434165 | orchestrator | Monday 05 May 2025 00:48:24 +0000 (0:00:00.789) 0:02:21.497 ************ 2025-05-05 00:53:05.434175 | 
orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.434186 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.434196 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.434206 | orchestrator | 2025-05-05 00:53:05.434216 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-05 00:53:05.434241 | orchestrator | Monday 05 May 2025 00:48:25 +0000 (0:00:01.121) 0:02:22.618 ************ 2025-05-05 00:53:05.434252 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.434262 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.434273 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.434283 | orchestrator | 2025-05-05 00:53:05.434293 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-05 00:53:05.434304 | orchestrator | Monday 05 May 2025 00:48:28 +0000 (0:00:02.091) 0:02:24.710 ************ 2025-05-05 00:53:05.434314 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.434324 | orchestrator | 2025-05-05 00:53:05.434335 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-05 00:53:05.434345 | orchestrator | Monday 05 May 2025 00:48:29 +0000 (0:00:01.439) 0:02:26.150 ************ 2025-05-05 00:53:05.434356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.434390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.434411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.434427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.434438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.434457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.434474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.434485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.434496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.434506 | orchestrator | 2025-05-05 00:53:05.434517 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-05 00:53:05.434527 | orchestrator | Monday 05 May 2025 00:48:36 +0000 (0:00:06.718) 0:02:32.868 ************ 2025-05-05 00:53:05.434543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-05 
00:53:05.434554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.434578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.434590 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.434601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.434611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.434628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.434639 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.434649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.434676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 
'tls_backend': 'no'}}}})  2025-05-05 00:53:05.434688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.434698 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.434709 | orchestrator | 2025-05-05 00:53:05.434719 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-05 00:53:05.434729 | orchestrator | Monday 05 May 2025 00:48:37 +0000 (0:00:00.812) 0:02:33.681 ************ 2025-05-05 00:53:05.434740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  
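(Editorial aside: the `haproxy-config` role templates each service item above into an HAProxy `listen` section. Below is a hypothetical sketch of what the `heat_api` entry would render to, based only on values visible in this log — the backend IPs and port 8004 come from the healthcheck lines above, while the bind address placeholder and the `check inter 2000 rise 2 fall 5` options are assumptions extrapolated from the glance `custom_member_list` entries earlier in this log, not confirmed for heat.)

```cfg
# Sketch of a kolla-ansible generated HAProxy section for heat_api.
# <internal_vip_address> is a placeholder; the actual VIP is not shown in this log.
listen heat_api
  mode http
  bind <internal_vip_address>:8004
  # Backend members mirror the three testbed nodes seen in the healthchecks;
  # the check parameters are assumed from the glance member list in this log.
  server testbed-node-0 192.168.16.10:8004 check inter 2000 rise 2 fall 5
  server testbed-node-1 192.168.16.11:8004 check inter 2000 rise 2 fall 5
  server testbed-node-2 192.168.16.12:8004 check inter 2000 rise 2 fall 5
```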
2025-05-05 00:53:05.434783 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.434794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434851 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.434862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-05 00:53:05.434904 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.434914 | orchestrator | 2025-05-05 00:53:05.434924 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-05 00:53:05.434934 | orchestrator | Monday 05 May 2025 00:48:38 +0000 (0:00:01.002) 0:02:34.683 ************ 2025-05-05 00:53:05.434944 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.434955 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.434969 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.434979 | orchestrator | 2025-05-05 00:53:05.434990 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-05 00:53:05.435000 | orchestrator | Monday 05 May 2025 00:48:39 +0000 (0:00:01.231) 0:02:35.915 ************ 2025-05-05 00:53:05.435011 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.435021 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.435031 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.435041 | orchestrator | 2025-05-05 00:53:05.435055 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-05 00:53:05.435066 | orchestrator | Monday 05 May 2025 00:48:41 +0000 (0:00:01.826) 0:02:37.741 ************ 2025-05-05 00:53:05.435077 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.435087 | orchestrator | 2025-05-05 00:53:05.435097 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-05 00:53:05.435107 | orchestrator | Monday 05 May 2025 00:48:41 +0000 (0:00:00.875) 0:02:38.617 ************ 2025-05-05 00:53:05.435131 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:53:05.435150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:53:05.435175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:53:05.435199 | orchestrator | 2025-05-05 00:53:05.435209 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-05 00:53:05.435220 | orchestrator | Monday 05 May 2025 00:48:46 +0000 (0:00:04.322) 0:02:42.939 ************ 2025-05-05 00:53:05.435230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:53:05.435242 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.435258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:53:05.435282 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.435293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:53:05.435304 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.435318 | orchestrator | 2025-05-05 00:53:05.435328 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-05 00:53:05.435339 | orchestrator | Monday 05 May 2025 00:48:47 +0000 
(0:00:00.845) 0:02:43.785 ************ 2025-05-05 00:53:05.435353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-05 00:53:05.435366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-05 00:53:05.435421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-05 00:53:05.435435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-05 00:53:05.435446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-05 00:53:05.435457 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.435473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-05 00:53:05.435485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-05 00:53:05.435495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-05 00:53:05.435506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-05 00:53:05.435517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-05 00:53:05.435527 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.435537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-05 00:53:05.435554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 
'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-05 00:53:05.435565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-05 00:53:05.435580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-05 00:53:05.435591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-05 00:53:05.435602 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.435612 | orchestrator | 2025-05-05 00:53:05.435623 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-05 00:53:05.435633 | orchestrator | Monday 05 May 2025 00:48:48 +0000 (0:00:01.316) 0:02:45.101 ************ 2025-05-05 00:53:05.435644 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.435654 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.435664 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.435674 | orchestrator | 2025-05-05 00:53:05.435684 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-05 00:53:05.435694 | orchestrator | Monday 05 May 2025 00:48:49 
+0000 (0:00:01.440) 0:02:46.542 ************ 2025-05-05 00:53:05.435704 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.435714 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.435724 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.435734 | orchestrator | 2025-05-05 00:53:05.435744 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-05 00:53:05.435754 | orchestrator | Monday 05 May 2025 00:48:52 +0000 (0:00:02.203) 0:02:48.745 ************ 2025-05-05 00:53:05.435765 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.435775 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.435785 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.435795 | orchestrator | 2025-05-05 00:53:05.435805 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-05 00:53:05.435815 | orchestrator | Monday 05 May 2025 00:48:52 +0000 (0:00:00.481) 0:02:49.227 ************ 2025-05-05 00:53:05.435825 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.435835 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.435846 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.435856 | orchestrator | 2025-05-05 00:53:05.435866 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-05 00:53:05.435876 | orchestrator | Monday 05 May 2025 00:48:52 +0000 (0:00:00.327) 0:02:49.554 ************ 2025-05-05 00:53:05.435886 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.435896 | orchestrator | 2025-05-05 00:53:05.435907 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-05 00:53:05.435917 | orchestrator | Monday 05 May 2025 00:48:54 +0000 (0:00:01.236) 0:02:50.791 ************ 2025-05-05 00:53:05.435928 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:53:05.435949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:53:05.435959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:53:05.435973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:53:05.435983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:53:05.435992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:53:05.436014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:53:05.436024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:53:05.436038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:53:05.436047 | orchestrator | 2025-05-05 00:53:05.436056 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-05 00:53:05.436065 | orchestrator | Monday 05 May 2025 00:48:58 +0000 (0:00:03.980) 0:02:54.771 ************ 2025-05-05 00:53:05.436074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:53:05.436089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:53:05.436104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:53:05.436113 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.436122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:53:05.436137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:53:05.436147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:53:05.436155 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.436164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:53:05.436185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:53:05.436194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:53:05.436204 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.436212 | orchestrator |
2025-05-05 00:53:05.436221 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-05-05 00:53:05.436230 | orchestrator | Monday 05 May 2025 00:48:59 +0000 (0:00:00.906) 0:02:55.677 ************
2025-05-05 00:53:05.436245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-05 00:53:05.436271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-05 00:53:05.436287 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.436304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-05 00:53:05.436320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-05 00:53:05.436336 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.436345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-05 00:53:05.436354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-05 00:53:05.436369 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.436398 | orchestrator |
2025-05-05 00:53:05.436413 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-05-05 00:53:05.436427 | orchestrator | Monday 05 May 2025 00:48:59 +0000 (0:00:00.928) 0:02:56.605 ************
2025-05-05 00:53:05.436442 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.436456 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.436470 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.436482 | orchestrator |
2025-05-05 00:53:05.436491 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-05-05 00:53:05.436500 | orchestrator | Monday 05 May 2025 00:49:01 +0000 (0:00:01.391) 0:02:57.997 ************
2025-05-05 00:53:05.436508 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.436517 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.436526 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.436534 | orchestrator |
2025-05-05 00:53:05.436543 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-05-05 00:53:05.436552 | orchestrator | Monday 05 May 2025 00:49:03 +0000 (0:00:00.310) 0:03:00.263 ************
2025-05-05 00:53:05.436561 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.436569 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.436582 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.436596 | orchestrator |
2025-05-05 00:53:05.436611 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-05-05 00:53:05.436626 | orchestrator | Monday 05 May 2025 00:49:03 +0000 (0:00:01.315) 0:03:00.573 ************
2025-05-05 00:53:05.436638 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.436647 | orchestrator |
2025-05-05 00:53:05.436656 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-05-05 00:53:05.436664 | orchestrator | Monday 05 May 2025 00:49:05 +0000 (0:00:01.315) 0:03:01.889 ************
2025-05-05 00:53:05.436674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 00:53:05.436689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.436700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 00:53:05.436724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.436734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 00:53:05.436825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.436843 | orchestrator |
2025-05-05 00:53:05.436857 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-05-05 00:53:05.436870 | orchestrator | Monday 05 May 2025 00:49:09 +0000 (0:00:04.645) 0:03:06.534 ************
2025-05-05 00:53:05.436886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 00:53:05.437020 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437034 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.437044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 00:53:05.437054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437063 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.437072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 00:53:05.437119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437136 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.437146 | orchestrator |
2025-05-05 00:53:05.437154 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-05-05 00:53:05.437164 | orchestrator | Monday 05 May 2025 00:49:10 +0000 (0:00:01.087) 0:03:07.621 ************
2025-05-05 00:53:05.437173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-05 00:53:05.437182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-05 00:53:05.437196 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.437204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-05 00:53:05.437213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-05 00:53:05.437222 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.437230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-05 00:53:05.437239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-05 00:53:05.437248 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.437256 | orchestrator |
2025-05-05 00:53:05.437265 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-05-05 00:53:05.437273 | orchestrator | Monday 05 May 2025 00:49:12 +0000 (0:00:01.217) 0:03:08.839 ************
2025-05-05 00:53:05.437282 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.437290 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.437299 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.437307 | orchestrator |
2025-05-05 00:53:05.437316 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-05-05 00:53:05.437324 | orchestrator | Monday 05 May 2025 00:49:13 +0000 (0:00:01.404) 0:03:10.243 ************
2025-05-05 00:53:05.437333 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.437342 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.437351 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.437359 | orchestrator |
2025-05-05 00:53:05.437368 | orchestrator | TASK [include_role : manila] ***************************************************
2025-05-05 00:53:05.437410 | orchestrator | Monday 05 May 2025 00:49:15 +0000 (0:00:02.243) 0:03:12.487 ************
2025-05-05 00:53:05.437420 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.437428 | orchestrator |
2025-05-05 00:53:05.437437 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-05-05 00:53:05.437445 | orchestrator | Monday 05 May 2025 00:49:17 +0000 (0:00:01.155) 0:03:13.642 ************
2025-05-05 00:53:05.437455 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-05 00:53:05.437543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-05 00:53:05.437607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes':
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-05 00:53:05.437752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437788 | orchestrator |
2025-05-05 00:53:05.437797 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-05-05 00:53:05.437807 | orchestrator | Monday 05 May 2025 00:49:21 +0000 (0:00:04.163) 0:03:17.806 ************
2025-05-05 00:53:05.437816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-05 00:53:05.437879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437911 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.437921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-05 00:53:05.437940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.437950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.438006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.438040 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.438052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-05 00:53:05.438061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.438071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.438086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.438095 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.438113 | orchestrator | 2025-05-05 00:53:05.438122 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-05 00:53:05.438131 | orchestrator | Monday 05 May 2025 
00:49:22 +0000 (0:00:00.856) 0:03:18.662 ************ 2025-05-05 00:53:05.438140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-05 00:53:05.438149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-05 00:53:05.438158 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.438168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-05 00:53:05.438232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-05 00:53:05.438245 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.438254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-05 00:53:05.438263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-05 00:53:05.438272 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.438280 | orchestrator | 2025-05-05 00:53:05.438289 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-05 00:53:05.438298 | orchestrator | Monday 05 May 2025 00:49:23 +0000 (0:00:01.308) 0:03:19.971 ************ 2025-05-05 00:53:05.438306 | orchestrator | changed: 
[testbed-node-0] 2025-05-05 00:53:05.438315 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.438324 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.438332 | orchestrator | 2025-05-05 00:53:05.438341 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-05 00:53:05.438349 | orchestrator | Monday 05 May 2025 00:49:24 +0000 (0:00:01.456) 0:03:21.427 ************ 2025-05-05 00:53:05.438358 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.438367 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.438421 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.438431 | orchestrator | 2025-05-05 00:53:05.438439 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-05 00:53:05.438448 | orchestrator | Monday 05 May 2025 00:49:27 +0000 (0:00:02.328) 0:03:23.755 ************ 2025-05-05 00:53:05.438463 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.438472 | orchestrator | 2025-05-05 00:53:05.438481 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-05 00:53:05.438490 | orchestrator | Monday 05 May 2025 00:49:28 +0000 (0:00:01.397) 0:03:25.153 ************ 2025-05-05 00:53:05.438499 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:53:05.438508 | orchestrator | 2025-05-05 00:53:05.438517 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-05 00:53:05.438525 | orchestrator | Monday 05 May 2025 00:49:31 +0000 (0:00:03.199) 0:03:28.352 ************ 2025-05-05 00:53:05.438535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-05 00:53:05.438604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-05 00:53:05.438617 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.438627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-05 00:53:05.438646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-05 00:53:05.438656 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.438714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-05 00:53:05.438728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-05 00:53:05.438743 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.438752 | orchestrator | 2025-05-05 00:53:05.438761 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-05 00:53:05.438770 | orchestrator | Monday 05 May 2025 00:49:34 +0000 (0:00:03.037) 0:03:31.390 ************ 2025-05-05 00:53:05.438779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-05 00:53:05.438789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-05 00:53:05.438798 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.438854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-05 00:53:05.438873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-05 00:53:05.438883 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.438892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-05 00:53:05.438947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-05 00:53:05.438964 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.438972 | orchestrator | 2025-05-05 00:53:05.438980 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-05 00:53:05.438988 | orchestrator | Monday 05 May 2025 00:49:39 +0000 (0:00:04.293) 
0:03:35.684 ************ 2025-05-05 00:53:05.438996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-05 00:53:05.439005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-05 00:53:05.439013 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  
2025-05-05 00:53:05.439030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-05 00:53:05.439038 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-05 00:53:05.439099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-05 00:53:05.439116 | 
orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439124 | orchestrator | 2025-05-05 00:53:05.439133 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-05 00:53:05.439141 | orchestrator | Monday 05 May 2025 00:49:42 +0000 (0:00:03.770) 0:03:39.455 ************ 2025-05-05 00:53:05.439149 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.439157 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.439165 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.439173 | orchestrator | 2025-05-05 00:53:05.439181 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-05 00:53:05.439190 | orchestrator | Monday 05 May 2025 00:49:44 +0000 (0:00:01.806) 0:03:41.261 ************ 2025-05-05 00:53:05.439198 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439206 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439214 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439222 | orchestrator | 2025-05-05 00:53:05.439230 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-05 00:53:05.439238 | orchestrator | Monday 05 May 2025 00:49:46 +0000 (0:00:01.476) 0:03:42.738 ************ 2025-05-05 00:53:05.439246 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439255 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439263 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439271 | orchestrator | 2025-05-05 00:53:05.439279 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-05 00:53:05.439287 | orchestrator | Monday 05 May 2025 00:49:46 +0000 (0:00:00.246) 0:03:42.985 ************ 2025-05-05 00:53:05.439295 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.439303 | orchestrator | 2025-05-05 
00:53:05.439311 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-05 00:53:05.439319 | orchestrator | Monday 05 May 2025 00:49:47 +0000 (0:00:01.204) 0:03:44.189 ************ 2025-05-05 00:53:05.439328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-05 00:53:05.439337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-05 00:53:05.439345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-05 00:53:05.439359 | orchestrator | 2025-05-05 00:53:05.439367 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-05 00:53:05.439434 | orchestrator | Monday 05 May 2025 00:49:49 +0000 (0:00:01.495) 0:03:45.685 ************ 2025-05-05 00:53:05.439446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-05 00:53:05.439455 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-05 00:53:05.439472 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-05 00:53:05.439490 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439498 | orchestrator | 2025-05-05 00:53:05.439506 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-05 00:53:05.439514 | orchestrator | Monday 05 May 2025 00:49:49 +0000 (0:00:00.447) 0:03:46.133 ************ 2025-05-05 00:53:05.439522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': 
False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-05 00:53:05.439530 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-05 00:53:05.439556 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-05 00:53:05.439573 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439581 | orchestrator | 2025-05-05 00:53:05.439589 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-05 00:53:05.439597 | orchestrator | Monday 05 May 2025 00:49:50 +0000 (0:00:00.700) 0:03:46.833 ************ 2025-05-05 00:53:05.439605 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439613 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439621 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439629 | orchestrator | 2025-05-05 00:53:05.439637 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-05 00:53:05.439645 | orchestrator | Monday 05 May 2025 00:49:50 +0000 (0:00:00.547) 0:03:47.381 ************ 2025-05-05 00:53:05.439653 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439661 | orchestrator | skipping: [testbed-node-1] 2025-05-05 
00:53:05.439709 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439721 | orchestrator | 2025-05-05 00:53:05.439729 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-05 00:53:05.439737 | orchestrator | Monday 05 May 2025 00:49:52 +0000 (0:00:01.894) 0:03:49.275 ************ 2025-05-05 00:53:05.439745 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.439753 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.439761 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.439769 | orchestrator | 2025-05-05 00:53:05.439777 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-05 00:53:05.439785 | orchestrator | Monday 05 May 2025 00:49:52 +0000 (0:00:00.331) 0:03:49.607 ************ 2025-05-05 00:53:05.439793 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.439801 | orchestrator | 2025-05-05 00:53:05.439809 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-05 00:53:05.439817 | orchestrator | Monday 05 May 2025 00:49:54 +0000 (0:00:01.833) 0:03:51.440 ************ 2025-05-05 00:53:05.439826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 00:53:05.439835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.439850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.439859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.439912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 00:53:05.439923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.439932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.439943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.439957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.439966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.440016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.440039 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.440053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 00:53:05.440113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.440191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.440221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 00:53:05.440365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.440401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.440427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.440452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.440549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.440562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 00:53:05.440613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.440628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.440724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 00:53:05.440846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.440868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 
00:53:05.440889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.440908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.440916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.440969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.440982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.441032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.441047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441062 | orchestrator | 2025-05-05 00:53:05.441071 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-05 00:53:05.441080 | orchestrator | Monday 05 May 2025 00:50:00 +0000 (0:00:05.315) 0:03:56.755 ************ 2025-05-05 00:53:05.441150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 00:53:05.441168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 00:53:05.441239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.441340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.441395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.441426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 00:53:05.441439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441522 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.441562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.441592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.441744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-05-05 00:53:05.441758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.441771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.441810 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.441895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.441938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441955 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.441969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.441982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-05 00:53:05.441996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.442123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.442186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.442199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442212 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.442225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 00:53:05.442248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-05-05 00:53:05.442452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.442481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.442601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.442634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442645 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.442657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 00:53:05.442670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 00:53:05.442800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 00:53:05.442813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.442825 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.442838 | orchestrator | 2025-05-05 00:53:05.442851 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-05 00:53:05.442867 | orchestrator | Monday 05 May 2025 00:50:02 +0000 (0:00:01.937) 0:03:58.693 ************ 2025-05-05 00:53:05.442880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-05 00:53:05.442892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-05 00:53:05.442904 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.442921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-05 00:53:05.442932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-05 00:53:05.442953 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.442965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-05 00:53:05.442977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-05 00:53:05.442990 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.443001 | orchestrator | 2025-05-05 00:53:05.443012 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-05 00:53:05.443023 | orchestrator | Monday 05 May 2025 00:50:04 +0000 (0:00:01.981) 0:04:00.674 ************ 2025-05-05 00:53:05.443035 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.443047 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.443063 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.443075 | orchestrator | 2025-05-05 00:53:05.443083 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-05 00:53:05.443090 | orchestrator | Monday 05 May 2025 00:50:05 +0000 (0:00:01.403) 0:04:02.078 ************ 2025-05-05 00:53:05.443097 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.443104 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.443111 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.443118 | orchestrator | 2025-05-05 00:53:05.443125 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-05 00:53:05.443133 | orchestrator | Monday 05 May 2025 00:50:07 +0000 (0:00:02.530) 0:04:04.609 ************ 2025-05-05 00:53:05.443174 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.443183 | orchestrator | 2025-05-05 00:53:05.443190 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-05 00:53:05.443197 | orchestrator | Monday 05 May 2025 00:50:09 +0000 (0:00:01.586) 0:04:06.196 ************ 2025-05-05 00:53:05.443205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.443223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.443231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.443244 | orchestrator | 2025-05-05 00:53:05.443251 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-05 00:53:05.443259 | orchestrator | Monday 05 May 2025 00:50:13 +0000 (0:00:04.384) 0:04:10.580 ************ 2025-05-05 00:53:05.443266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}})  2025-05-05 00:53:05.443290 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.443298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.443306 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.443319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.443331 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.443339 | orchestrator | 2025-05-05 00:53:05.443346 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-05 00:53:05.443353 | orchestrator | Monday 05 May 2025 00:50:14 +0000 (0:00:00.418) 0:04:10.998 ************ 2025-05-05 00:53:05.443360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-05 00:53:05.443368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-05 00:53:05.443398 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.443407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-05 00:53:05.443415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-05 00:53:05.443424 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.443432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-05 00:53:05.443440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-05 00:53:05.443449 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.443457 | orchestrator | 2025-05-05 00:53:05.443465 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-05 00:53:05.443473 | orchestrator | Monday 05 May 2025 00:50:15 +0000 (0:00:00.920) 0:04:11.919 ************ 2025-05-05 00:53:05.443481 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.443490 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.443497 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.443505 | orchestrator | 2025-05-05 00:53:05.443512 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-05 00:53:05.443519 | orchestrator | Monday 05 May 2025 00:50:16 +0000 (0:00:01.123) 0:04:13.043 ************ 2025-05-05 00:53:05.443526 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.443533 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.443558 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.443567 | orchestrator | 2025-05-05 00:53:05.443575 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-05 00:53:05.443582 | orchestrator | Monday 05 May 2025 00:50:18 +0000 (0:00:02.179) 0:04:15.222 ************ 2025-05-05 00:53:05.443590 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.443597 | orchestrator | 2025-05-05 00:53:05.443605 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-05 00:53:05.443612 | orchestrator | Monday 05 May 2025 00:50:20 +0000 (0:00:01.673) 0:04:16.896 ************ 2025-05-05 00:53:05.443620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.443634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.443642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.443685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.443714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443730 | orchestrator |
2025-05-05 00:53:05.443738 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-05-05 00:53:05.443746 | orchestrator | Monday 05 May 2025 00:50:25 +0000 (0:00:04.826) 0:04:21.723 ************
2025-05-05 00:53:05.443776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.443794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443810 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.443818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.443847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443869 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.443877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.443886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 00:53:05.443901 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.443909 | orchestrator |
2025-05-05 00:53:05.443916 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-05-05 00:53:05.443924 | orchestrator | Monday 05 May 2025 00:50:25 +0000 (0:00:00.770) 0:04:22.494 ************
2025-05-05 00:53:05.443931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-05 00:53:05.443940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-05 00:53:05.443948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-05 00:53:05.443979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-05 00:53:05.443999 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.444011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444041 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.444048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-05 00:53:05.444077 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.444084 | orchestrator |
2025-05-05 00:53:05.444091 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-05-05 00:53:05.444098 | orchestrator | Monday 05 May 2025 00:50:26 +0000 (0:00:01.011) 0:04:23.505 ************
2025-05-05 00:53:05.444105 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.444112 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.444119 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.444126 | orchestrator |
2025-05-05 00:53:05.444134 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-05-05 00:53:05.444141 | orchestrator | Monday 05 May 2025 00:50:28 +0000 (0:00:01.322) 0:04:24.828 ************
2025-05-05 00:53:05.444148 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.444155 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.444162 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.444168 | orchestrator |
2025-05-05 00:53:05.444176 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-05-05 00:53:05.444182 | orchestrator | Monday 05 May 2025 00:50:30 +0000 (0:00:02.276) 0:04:27.104 ************
2025-05-05 00:53:05.444190 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.444196 | orchestrator |
2025-05-05 00:53:05.444207 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-05-05 00:53:05.444214 | orchestrator | Monday 05 May 2025 00:50:32 +0000 (0:00:01.779) 0:04:28.884 ************
2025-05-05 00:53:05.444221 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-05-05 00:53:05.444230 | orchestrator |
2025-05-05 00:53:05.444237 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-05-05 00:53:05.444244 | orchestrator | Monday 05 May 2025 00:50:33 +0000 (0:00:01.194) 0:04:30.079 ************
2025-05-05 00:53:05.444256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444308 | orchestrator |
2025-05-05 00:53:05.444315 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-05-05 00:53:05.444322 | orchestrator | Monday 05 May 2025 00:50:38 +0000 (0:00:04.857) 0:04:34.936 ************
2025-05-05 00:53:05.444329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444336 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.444344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444351 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.444358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444365 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.444389 | orchestrator |
2025-05-05 00:53:05.444401 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-05-05 00:53:05.444409 | orchestrator | Monday 05 May 2025 00:50:40 +0000 (0:00:01.742) 0:04:36.678 ************
2025-05-05 00:53:05.444416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-05 00:53:05.444429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-05 00:53:05.444438 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.444445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-05 00:53:05.444455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-05 00:53:05.444462 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.444470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-05 00:53:05.444495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-05 00:53:05.444503 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.444511 | orchestrator |
2025-05-05 00:53:05.444518 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-05 00:53:05.444525 | orchestrator | Monday 05 May 2025 00:50:41 +0000 (0:00:01.723) 0:04:38.402 ************
2025-05-05 00:53:05.444532 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.444539 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.444546 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.444553 | orchestrator |
2025-05-05 00:53:05.444560 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-05 00:53:05.444567 | orchestrator | Monday 05 May 2025 00:50:44 +0000 (0:00:02.833) 0:04:41.235 ************
2025-05-05 00:53:05.444574 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:53:05.444581 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:53:05.444588 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:53:05.444595 | orchestrator |
2025-05-05 00:53:05.444601 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-05-05 00:53:05.444609 | orchestrator | Monday 05 May 2025 00:50:48 +0000 (0:00:03.394) 0:04:44.630 ************
2025-05-05 00:53:05.444616 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-05-05 00:53:05.444623 | orchestrator |
2025-05-05 00:53:05.444630 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-05-05 00:53:05.444637 | orchestrator | Monday 05 May 2025 00:50:49 +0000 (0:00:01.300) 0:04:45.931 ************
2025-05-05 00:53:05.444645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444652 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.444659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444671 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.444678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444686 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.444693 | orchestrator |
2025-05-05 00:53:05.444700 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-05-05 00:53:05.444707 | orchestrator | Monday 05 May 2025 00:50:51 +0000 (0:00:01.721) 0:04:47.652 ************
2025-05-05 00:53:05.444721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444728 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.444752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444761 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.444768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-05 00:53:05.444775 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.444782 | orchestrator |
2025-05-05 00:53:05.444790 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-05-05 00:53:05.444797 | orchestrator | Monday 05 May 2025 00:50:52 +0000 (0:00:01.637) 0:04:49.290 ************
2025-05-05 00:53:05.444804 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.444811 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.444818 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.444825 | orchestrator |
2025-05-05 00:53:05.444832 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-05 00:53:05.444840 | orchestrator | Monday 05 May 2025 00:50:54 +0000 (0:00:01.878) 0:04:51.168 ************
2025-05-05 00:53:05.444847 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.444861 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.444872 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:53:05.444879 | orchestrator |
2025-05-05 00:53:05.444886 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-05 00:53:05.444893 | orchestrator | Monday 05 May 2025 00:50:57 +0000 (0:00:02.778) 0:04:53.947 ************
2025-05-05 00:53:05.444900 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.444907 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.444914 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:53:05.444921 | orchestrator |
2025-05-05 00:53:05.444928 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-05-05 00:53:05.444935 | orchestrator | Monday 05 May 2025 00:51:00 +0000 (0:00:03.338) 0:04:57.285 ************
2025-05-05 00:53:05.444943 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-05-05 00:53:05.444950 | orchestrator |
2025-05-05 00:53:05.444962 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-05-05 00:53:05.444975 | orchestrator | Monday 05 May 2025 00:51:02 +0000 (0:00:01.482) 0:04:58.768 ************
2025-05-05 00:53:05.444988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-05 00:53:05.445001 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.445014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-05 00:53:05.445027 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.445037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-05 00:53:05.445044 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.445051 | orchestrator |
2025-05-05 00:53:05.445058 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-05-05 00:53:05.445066 | orchestrator | Monday 05 May 2025 00:51:04 +0000 (0:00:01.873) 0:05:00.641 ************
2025-05-05 00:53:05.445093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-05 00:53:05.445101 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.445116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-05 00:53:05.445129 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.445137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-05 00:53:05.445144 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.445151 | orchestrator |
2025-05-05 00:53:05.445158 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-05-05 00:53:05.445165 | orchestrator | Monday 05 May 2025 00:51:05 +0000 (0:00:01.478) 0:05:02.120 ************
2025-05-05 00:53:05.445173 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.445180 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.445187 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.445194 | orchestrator |
2025-05-05 00:53:05.445201 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-05 00:53:05.445208 | orchestrator | Monday 05 May 2025 00:51:07 +0000 (0:00:02.191) 0:05:04.312 ************
2025-05-05 00:53:05.445215 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.445221 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.445228 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:53:05.445236 | orchestrator |
2025-05-05 00:53:05.445243 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-05 00:53:05.445254 | orchestrator | Monday 05 May 2025 00:51:10 +0000 (0:00:03.148) 0:05:07.460 ************
2025-05-05 00:53:05.445261 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:53:05.445268 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:53:05.445276 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:53:05.445282 | orchestrator |
2025-05-05 00:53:05.445290 | orchestrator | TASK [include_role : octavia] **************************************************
2025-05-05 00:53:05.445297 | orchestrator | Monday 05 May 2025 00:51:14 +0000 (0:00:03.480) 0:05:10.941 ************
2025-05-05 00:53:05.445304 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.445311 | orchestrator |
2025-05-05 00:53:05.445318 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-05-05 00:53:05.445325 | orchestrator | Monday 05 May 2025 00:51:16 +0000 (0:00:01.713) 0:05:12.655 ************
2025-05-05 00:53:05.445332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-05 00:53:05.445364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-05 00:53:05.445389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.445420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.445427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-05 00:53:05.445435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.445480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-05 00:53:05.445495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.445508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.445553 | orchestrator | 2025-05-05 00:53:05.445560 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-05 00:53:05.445568 | orchestrator | Monday 05 May 2025 00:51:20 +0000 (0:00:04.581) 0:05:17.236 ************ 2025-05-05 00:53:05.445575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.445582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-05 00:53:05.445590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.445616 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.445645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.445654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-05 00:53:05.445662 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.445689 | orchestrator | skipping: [testbed-node-1] 2025-05-05 
00:53:05.445716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.445725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-05 00:53:05.445733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-05 00:53:05.445748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-05 00:53:05.445755 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.445762 | orchestrator | 2025-05-05 00:53:05.445770 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-05 00:53:05.445777 | orchestrator | Monday 05 May 2025 00:51:21 +0000 (0:00:00.963) 0:05:18.200 ************ 2025-05-05 00:53:05.445789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-05 00:53:05.445796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-05 00:53:05.445804 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.445811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-05 00:53:05.445819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-05 00:53:05.445826 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.445833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-05 00:53:05.445841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-05 00:53:05.445862 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.445870 | orchestrator | 2025-05-05 00:53:05.445877 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-05 00:53:05.445884 | orchestrator | Monday 05 May 2025 00:51:22 +0000 (0:00:01.329) 0:05:19.529 ************ 2025-05-05 00:53:05.445892 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.445898 | orchestrator | 
changed: [testbed-node-1] 2025-05-05 00:53:05.445905 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.445912 | orchestrator | 2025-05-05 00:53:05.445920 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-05 00:53:05.445927 | orchestrator | Monday 05 May 2025 00:51:24 +0000 (0:00:01.465) 0:05:20.994 ************ 2025-05-05 00:53:05.445934 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.445952 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.445964 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.445985 | orchestrator | 2025-05-05 00:53:05.445996 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-05 00:53:05.446008 | orchestrator | Monday 05 May 2025 00:51:27 +0000 (0:00:02.707) 0:05:23.701 ************ 2025-05-05 00:53:05.446057 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.446065 | orchestrator | 2025-05-05 00:53:05.446072 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-05 00:53:05.446080 | orchestrator | Monday 05 May 2025 00:51:28 +0000 (0:00:01.716) 0:05:25.418 ************ 2025-05-05 00:53:05.446097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:53:05.446113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:53:05.446120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:53:05.446158 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:53:05.446168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:53:05.446176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:53:05.446188 | orchestrator | 2025-05-05 00:53:05.446196 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-05 00:53:05.446203 | orchestrator | Monday 05 May 2025 00:51:35 +0000 (0:00:07.004) 0:05:32.423 ************ 2025-05-05 00:53:05.446211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-05 00:53:05.446243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:53:05.446252 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.446260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-05 00:53:05.446272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:53:05.446282 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.446289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-05 00:53:05.446316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:53:05.446324 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.446332 | orchestrator |
2025-05-05 00:53:05.446339 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-05-05 00:53:05.446346 | orchestrator | Monday 05 May 2025 00:51:36 +0000 (0:00:01.001) 0:05:33.425 ************
2025-05-05 00:53:05.446353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-05 00:53:05.446361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-05 00:53:05.446368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-05 00:53:05.446422 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.446431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-05 00:53:05.446438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-05 00:53:05.446446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-05 00:53:05.446453 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.446464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-05 00:53:05.446472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-05 00:53:05.446479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-05 00:53:05.446487 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.446494 | orchestrator |
2025-05-05 00:53:05.446500 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-05-05 00:53:05.446507 | orchestrator | Monday 05 May 2025 00:51:38 +0000 (0:00:01.430) 0:05:34.855 ************
2025-05-05 00:53:05.446513 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.446519 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.446526 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.446532 | orchestrator |
2025-05-05 00:53:05.446538 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-05-05 00:53:05.446544 | orchestrator | Monday 05 May 2025 00:51:38 +0000 (0:00:00.450) 0:05:35.306 ************
2025-05-05 00:53:05.446550 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:53:05.446557 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:53:05.446563 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:53:05.446569 | orchestrator |
2025-05-05 00:53:05.446575 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-05-05 00:53:05.446582 | orchestrator | Monday 05 May 2025 00:51:40 +0000 (0:00:01.839) 0:05:37.145 ************
2025-05-05 00:53:05.446588
| orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:53:05.446594 | orchestrator |
2025-05-05 00:53:05.446601 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-05-05 00:53:05.446607 | orchestrator | Monday 05 May 2025 00:51:42 +0000 (0:00:01.768) 0:05:38.914 ************
2025-05-05 00:53:05.446636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 00:53:05.446649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 00:53:05.446656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.446682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 00:53:05.446689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 00:53:05.446710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.446736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 00:53:05.446743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 00:53:05.446755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.446796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes':
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 00:53:05.446809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 00:53:05.446816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.446849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 00:53:05.446874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 00:53:05.446880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.446906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 00:53:05.446927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 00:53:05.446934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.446962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.446974 | orchestrator |
2025-05-05 00:53:05.446993 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-05-05 00:53:05.447004 | orchestrator | Monday 05 May 2025 00:51:47 +0000 (0:00:05.014) 0:05:43.929 ************
2025-05-05 00:53:05.447015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 00:53:05.447026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 00:53:05.447034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.447047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 00:53:05.447054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 00:53:05.447060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 00:53:05.447076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 00:53:05.447083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 00:53:05.447108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 00:53:05.447118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-05 00:53:05.447127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447140 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447152 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-05 00:53:05.447165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 00:53:05.447176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 00:53:05.447185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 00:53:05.447210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447216 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 00:53:05.447233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-05 00:53:05.447240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-05 00:53:05.447268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 00:53:05.447274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 00:53:05.447285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 00:53:05.447312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 00:53:05.447319 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447326 | orchestrator | 2025-05-05 00:53:05.447332 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-05 00:53:05.447338 | orchestrator | Monday 05 May 2025 00:51:48 +0000 (0:00:01.681) 0:05:45.611 ************ 2025-05-05 00:53:05.447345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-05 00:53:05.447351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-05 00:53:05.447358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-05 00:53:05.447365 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-05 00:53:05.447386 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-05 00:53:05.447402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-05 00:53:05.447416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-05 00:53:05.447426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-05 00:53:05.447433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-05 00:53:05.447440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-05 00:53:05.447446 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-05 00:53:05.447459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-05 00:53:05.447466 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447472 | orchestrator | 2025-05-05 00:53:05.447478 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-05 00:53:05.447485 | orchestrator | Monday 05 May 2025 00:51:50 +0000 (0:00:01.594) 0:05:47.205 ************ 2025-05-05 00:53:05.447494 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447500 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447512 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447518 | orchestrator | 2025-05-05 00:53:05.447525 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-05 00:53:05.447531 | orchestrator | Monday 05 May 2025 00:51:51 +0000 (0:00:00.830) 0:05:48.035 ************ 2025-05-05 00:53:05.447537 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447543 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447549 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447556 | orchestrator | 2025-05-05 00:53:05.447562 | orchestrator | TASK 
[include_role : rabbitmq] ************************************************* 2025-05-05 00:53:05.447568 | orchestrator | Monday 05 May 2025 00:51:53 +0000 (0:00:02.441) 0:05:50.477 ************ 2025-05-05 00:53:05.447574 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.447580 | orchestrator | 2025-05-05 00:53:05.447586 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-05 00:53:05.447593 | orchestrator | Monday 05 May 2025 00:51:56 +0000 (0:00:02.231) 0:05:52.709 ************ 2025-05-05 00:53:05.447599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:53:05.447611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:53:05.447618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-05 00:53:05.447630 | orchestrator | 2025-05-05 00:53:05.447636 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-05 00:53:05.447642 | orchestrator | Monday 05 May 2025 00:51:58 +0000 (0:00:02.790) 0:05:55.499 ************ 2025-05-05 
00:53:05.447652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-05 00:53:05.447659 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-05 00:53:05.447676 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-05 00:53:05.447689 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447695 | orchestrator | 2025-05-05 00:53:05.447701 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-05 00:53:05.447708 | orchestrator | Monday 05 May 2025 00:51:59 +0000 (0:00:00.651) 0:05:56.150 ************ 2025-05-05 00:53:05.447714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-05 00:53:05.447721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-05 00:53:05.447727 
| orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447734 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-05 00:53:05.447746 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447818 | orchestrator | 2025-05-05 00:53:05.447825 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-05 00:53:05.447832 | orchestrator | Monday 05 May 2025 00:52:00 +0000 (0:00:01.064) 0:05:57.215 ************ 2025-05-05 00:53:05.447838 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447844 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447850 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447857 | orchestrator | 2025-05-05 00:53:05.447863 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-05 00:53:05.447872 | orchestrator | Monday 05 May 2025 00:52:01 +0000 (0:00:00.436) 0:05:57.651 ************ 2025-05-05 00:53:05.447878 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.447885 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.447891 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.447897 | orchestrator | 2025-05-05 00:53:05.447903 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-05 00:53:05.447913 | orchestrator | Monday 05 May 2025 00:52:02 +0000 (0:00:01.685) 0:05:59.337 ************ 2025-05-05 00:53:05.447920 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:53:05.447926 | orchestrator | 2025-05-05 00:53:05.447932 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-05 00:53:05.447938 | orchestrator | 
Monday 05 May 2025 00:52:04 +0000 (0:00:01.864) 0:06:01.202 ************ 2025-05-05 00:53:05.447945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.447952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.447963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.447978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-05 
00:53:05.447994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.448005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-05 00:53:05.448016 | orchestrator | 2025-05-05 00:53:05.448026 | 
orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-05 00:53:05.448036 | orchestrator | Monday 05 May 2025 00:52:12 +0000 (0:00:07.558) 0:06:08.760 ************ 2025-05-05 00:53:05.448046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.448059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.448071 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.448084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.448091 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.448104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-05 00:53:05.448115 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448122 | orchestrator | 2025-05-05 00:53:05.448128 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-05 00:53:05.448137 | orchestrator | Monday 05 May 2025 00:52:13 +0000 (0:00:01.170) 0:06:09.931 ************ 2025-05-05 00:53:05.448144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448169 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448183 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448202 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-05 00:53:05.448233 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448239 | orchestrator | 2025-05-05 00:53:05.448245 | 
orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-05 00:53:05.448251 | orchestrator | Monday 05 May 2025 00:52:14 +0000 (0:00:01.588) 0:06:11.520 ************ 2025-05-05 00:53:05.448262 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.448268 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.448275 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.448281 | orchestrator | 2025-05-05 00:53:05.448287 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-05 00:53:05.448297 | orchestrator | Monday 05 May 2025 00:52:16 +0000 (0:00:01.592) 0:06:13.112 ************ 2025-05-05 00:53:05.448303 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.448309 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.448315 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.448322 | orchestrator | 2025-05-05 00:53:05.448328 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-05 00:53:05.448334 | orchestrator | Monday 05 May 2025 00:52:19 +0000 (0:00:02.576) 0:06:15.689 ************ 2025-05-05 00:53:05.448340 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448347 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448353 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448359 | orchestrator | 2025-05-05 00:53:05.448365 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-05 00:53:05.448387 | orchestrator | Monday 05 May 2025 00:52:19 +0000 (0:00:00.312) 0:06:16.001 ************ 2025-05-05 00:53:05.448394 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448401 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448407 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448413 | orchestrator | 2025-05-05 00:53:05.448420 | 
orchestrator | TASK [include_role : trove] **************************************************** 2025-05-05 00:53:05.448429 | orchestrator | Monday 05 May 2025 00:52:19 +0000 (0:00:00.580) 0:06:16.582 ************ 2025-05-05 00:53:05.448435 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448442 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448448 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448454 | orchestrator | 2025-05-05 00:53:05.448460 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-05 00:53:05.448466 | orchestrator | Monday 05 May 2025 00:52:20 +0000 (0:00:00.810) 0:06:17.392 ************ 2025-05-05 00:53:05.448473 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448479 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448485 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448491 | orchestrator | 2025-05-05 00:53:05.448497 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-05 00:53:05.448504 | orchestrator | Monday 05 May 2025 00:52:21 +0000 (0:00:00.577) 0:06:17.969 ************ 2025-05-05 00:53:05.448510 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448516 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448522 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448528 | orchestrator | 2025-05-05 00:53:05.448535 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-05 00:53:05.448541 | orchestrator | Monday 05 May 2025 00:52:21 +0000 (0:00:00.314) 0:06:18.284 ************ 2025-05-05 00:53:05.448547 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448553 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448560 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.448566 | orchestrator | 2025-05-05 00:53:05.448572 | 
orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-05 00:53:05.448578 | orchestrator | Monday 05 May 2025 00:52:22 +0000 (0:00:01.015) 0:06:19.299 ************ 2025-05-05 00:53:05.448584 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448591 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448597 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448603 | orchestrator | 2025-05-05 00:53:05.448609 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-05 00:53:05.448615 | orchestrator | Monday 05 May 2025 00:52:23 +0000 (0:00:00.883) 0:06:20.182 ************ 2025-05-05 00:53:05.448622 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448632 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448639 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448649 | orchestrator | 2025-05-05 00:53:05.448656 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-05 00:53:05.448662 | orchestrator | Monday 05 May 2025 00:52:23 +0000 (0:00:00.323) 0:06:20.506 ************ 2025-05-05 00:53:05.448668 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448675 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448681 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448687 | orchestrator | 2025-05-05 00:53:05.448694 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-05 00:53:05.448700 | orchestrator | Monday 05 May 2025 00:52:25 +0000 (0:00:01.305) 0:06:21.812 ************ 2025-05-05 00:53:05.448706 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448712 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448718 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448725 | orchestrator | 2025-05-05 00:53:05.448731 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql 
container] **************** 2025-05-05 00:53:05.448737 | orchestrator | Monday 05 May 2025 00:52:26 +0000 (0:00:01.237) 0:06:23.049 ************ 2025-05-05 00:53:05.448743 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448749 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448755 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448761 | orchestrator | 2025-05-05 00:53:05.448767 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-05 00:53:05.448774 | orchestrator | Monday 05 May 2025 00:52:27 +0000 (0:00:01.054) 0:06:24.103 ************ 2025-05-05 00:53:05.448780 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.448786 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.448792 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.448798 | orchestrator | 2025-05-05 00:53:05.448804 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-05 00:53:05.448811 | orchestrator | Monday 05 May 2025 00:52:36 +0000 (0:00:09.070) 0:06:33.174 ************ 2025-05-05 00:53:05.448817 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448823 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448829 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448835 | orchestrator | 2025-05-05 00:53:05.448842 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-05 00:53:05.448848 | orchestrator | Monday 05 May 2025 00:52:37 +0000 (0:00:01.190) 0:06:34.364 ************ 2025-05-05 00:53:05.448854 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.448860 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.448866 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.448872 | orchestrator | 2025-05-05 00:53:05.448878 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-05 
00:53:05.448885 | orchestrator | Monday 05 May 2025 00:52:44 +0000 (0:00:07.067) 0:06:41.432 ************ 2025-05-05 00:53:05.448891 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.448897 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.448903 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.448909 | orchestrator | 2025-05-05 00:53:05.448915 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-05 00:53:05.448925 | orchestrator | Monday 05 May 2025 00:52:48 +0000 (0:00:03.805) 0:06:45.238 ************ 2025-05-05 00:53:05.448931 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:53:05.448938 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:53:05.448944 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:53:05.448950 | orchestrator | 2025-05-05 00:53:05.448959 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-05 00:53:05.448969 | orchestrator | Monday 05 May 2025 00:52:57 +0000 (0:00:09.111) 0:06:54.349 ************ 2025-05-05 00:53:05.448979 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.448988 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.448998 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.449013 | orchestrator | 2025-05-05 00:53:05.449024 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-05 00:53:05.449033 | orchestrator | Monday 05 May 2025 00:52:58 +0000 (0:00:00.617) 0:06:54.967 ************ 2025-05-05 00:53:05.449043 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.449053 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.449059 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.449066 | orchestrator | 2025-05-05 00:53:05.449072 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-05 00:53:05.449078 | 
orchestrator | Monday 05 May 2025 00:52:58 +0000 (0:00:00.622) 0:06:55.590 ************ 2025-05-05 00:53:05.449084 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.449091 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.449097 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.449103 | orchestrator | 2025-05-05 00:53:05.449109 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-05 00:53:05.449115 | orchestrator | Monday 05 May 2025 00:52:59 +0000 (0:00:00.358) 0:06:55.948 ************ 2025-05-05 00:53:05.449122 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.449128 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.449134 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.449141 | orchestrator | 2025-05-05 00:53:05.449147 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-05 00:53:05.449153 | orchestrator | Monday 05 May 2025 00:52:59 +0000 (0:00:00.629) 0:06:56.578 ************ 2025-05-05 00:53:05.449159 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.449165 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.449171 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.449178 | orchestrator | 2025-05-05 00:53:05.449184 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-05 00:53:05.449190 | orchestrator | Monday 05 May 2025 00:53:00 +0000 (0:00:00.601) 0:06:57.180 ************ 2025-05-05 00:53:05.449196 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:53:05.449202 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:53:05.449208 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:53:05.449214 | orchestrator | 2025-05-05 00:53:05.449221 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-05 00:53:05.449227 | 
orchestrator | Monday 05 May 2025 00:53:01 +0000 (0:00:00.641) 0:06:57.821 ************ 2025-05-05 00:53:05.449233 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.449239 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.449245 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.449251 | orchestrator | 2025-05-05 00:53:05.449257 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-05 00:53:05.449264 | orchestrator | Monday 05 May 2025 00:53:02 +0000 (0:00:01.070) 0:06:58.892 ************ 2025-05-05 00:53:05.449270 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:53:05.449276 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:53:05.449282 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:53:05.449288 | orchestrator | 2025-05-05 00:53:05.449294 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:53:05.449301 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-05 00:53:05.449307 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-05 00:53:05.449314 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-05 00:53:05.449320 | orchestrator | 2025-05-05 00:53:05.449326 | orchestrator | 2025-05-05 00:53:05.449332 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:53:05.449338 | orchestrator | Monday 05 May 2025 00:53:03 +0000 (0:00:01.606) 0:07:00.498 ************ 2025-05-05 00:53:05.449351 | orchestrator | =============================================================================== 2025-05-05 00:53:05.449357 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.11s 2025-05-05 00:53:05.449364 | orchestrator | loadbalancer : Start backup haproxy 
container --------------------------- 9.07s 2025-05-05 00:53:05.449370 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.56s 2025-05-05 00:53:05.449389 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.07s 2025-05-05 00:53:05.449395 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.00s 2025-05-05 00:53:05.449401 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 6.72s 2025-05-05 00:53:05.449407 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.32s 2025-05-05 00:53:05.449414 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.31s 2025-05-05 00:53:05.449420 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.16s 2025-05-05 00:53:05.449426 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.01s 2025-05-05 00:53:05.449432 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.86s 2025-05-05 00:53:05.449441 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.83s 2025-05-05 00:53:05.449448 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.65s 2025-05-05 00:53:05.449454 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.58s 2025-05-05 00:53:05.449460 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.38s 2025-05-05 00:53:05.449466 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.35s 2025-05-05 00:53:05.449472 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.32s 2025-05-05 00:53:05.449478 | orchestrator | haproxy-config : Add configuration for mariadb 
when using single external frontend --- 4.29s 2025-05-05 00:53:05.449485 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.28s 2025-05-05 00:53:05.449491 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.16s 2025-05-05 00:53:05.449500 | orchestrator | 2025-05-05 00:53:05 | INFO  | Task 6dac9070-b623-40e4-a741-ce4498a2c2af is in state STARTED 2025-05-05 00:53:08.466215 | orchestrator | 2025-05-05 00:53:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:53:08.466365 | orchestrator | 2025-05-05 00:53:08 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:53:08.468835 | orchestrator | 2025-05-05 00:53:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:53:08.479539 | orchestrator | 2025-05-05 00:53:08 | INFO  | Task e8fc85f1-2e6c-4a21-8ff3-a8fa48cc6cfb is in state STARTED 2025-05-05 00:53:11.509415 | orchestrator | 2025-05-05 00:53:08 | INFO  | Task 6dac9070-b623-40e4-a741-ce4498a2c2af is in state STARTED 2025-05-05 00:53:11.509514 | orchestrator | 2025-05-05 00:53:08 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED 2025-05-05 00:53:11.509534 | orchestrator | 2025-05-05 00:53:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:53:11.509564 | orchestrator | 2025-05-05 00:53:11 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:53:11.510667 | orchestrator | 2025-05-05 00:53:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:53:11.512454 | orchestrator | 2025-05-05 00:53:11 | INFO  | Task e8fc85f1-2e6c-4a21-8ff3-a8fa48cc6cfb is in state STARTED 2025-05-05 00:53:11.513099 | orchestrator | 2025-05-05 00:53:11 | INFO  | Task 6dac9070-b623-40e4-a741-ce4498a2c2af is in state STARTED 2025-05-05 00:53:11.514740 | orchestrator | 2025-05-05 00:53:11 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in 
state STARTED 2025-05-05 00:53:14.553107 | orchestrator | 2025-05-05 00:53:14 | INFO  | Task 6dac9070-b623-40e4-a741-ce4498a2c2af is in state SUCCESS
[... repetitive polling output elided: the remaining tasks fb8c08f3-1643-4307-9bcb-8e2211b41707, f23b16f3-902d-408a-ad02-63f3cf4dba3e, e8fc85f1-2e6c-4a21-8ff3-a8fa48cc6cfb and 3224f3e6-ef87-4cef-925d-0645d48d9604 are reported "is in state STARTED", followed by "Wait 1 second(s) until the next check", every ~3 seconds from 00:53:14 until 00:55:10 ...]
2025-05-05 00:55:10.475001 | orchestrator | 2025-05-05 00:55:10 | INFO  | Task 
3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED 2025-05-05 00:55:13.527435 | orchestrator | 2025-05-05 00:55:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:55:13.527572 | orchestrator | 2025-05-05 00:55:13 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:55:13.528832 | orchestrator | 2025-05-05 00:55:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:55:13.530454 | orchestrator | 2025-05-05 00:55:13 | INFO  | Task e8fc85f1-2e6c-4a21-8ff3-a8fa48cc6cfb is in state STARTED 2025-05-05 00:55:13.533049 | orchestrator | 2025-05-05 00:55:13 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED 2025-05-05 00:55:16.587917 | orchestrator | 2025-05-05 00:55:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:55:16.588083 | orchestrator | 2025-05-05 00:55:16 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:55:16.588985 | orchestrator | 2025-05-05 00:55:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:55:16.592977 | orchestrator | 2025-05-05 00:55:16 | INFO  | Task e8fc85f1-2e6c-4a21-8ff3-a8fa48cc6cfb is in state STARTED 2025-05-05 00:55:16.597971 | orchestrator | 2025-05-05 00:55:16 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED 2025-05-05 00:55:19.663378 | orchestrator | 2025-05-05 00:55:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:55:19.663519 | orchestrator | 2025-05-05 00:55:19 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED 2025-05-05 00:55:19.663731 | orchestrator | 2025-05-05 00:55:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:55:19.666893 | orchestrator | 2025-05-05 00:55:19 | INFO  | Task e8fc85f1-2e6c-4a21-8ff3-a8fa48cc6cfb is in state SUCCESS 2025-05-05 00:55:19.668800 | orchestrator | 2025-05-05 00:55:19.668837 | orchestrator | None 2025-05-05 
00:55:19.668853 | orchestrator | 2025-05-05 00:55:19.668869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 00:55:19.668884 | orchestrator | 2025-05-05 00:55:19.668900 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 00:55:19.668915 | orchestrator | Monday 05 May 2025 00:53:08 +0000 (0:00:00.267) 0:00:00.268 ************ 2025-05-05 00:55:19.668931 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:55:19.668947 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:55:19.668979 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:55:19.669119 | orchestrator | 2025-05-05 00:55:19.669135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 00:55:19.669149 | orchestrator | Monday 05 May 2025 00:53:08 +0000 (0:00:00.304) 0:00:00.572 ************ 2025-05-05 00:55:19.669165 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-05 00:55:19.669179 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-05 00:55:19.669193 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-05 00:55:19.669207 | orchestrator | 2025-05-05 00:55:19.669222 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-05 00:55:19.669236 | orchestrator | 2025-05-05 00:55:19.669250 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-05 00:55:19.669263 | orchestrator | Monday 05 May 2025 00:53:08 +0000 (0:00:00.303) 0:00:00.875 ************ 2025-05-05 00:55:19.669278 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:55:19.669292 | orchestrator | 2025-05-05 00:55:19.669343 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 
2025-05-05 00:55:19.669359 | orchestrator | Monday 05 May 2025 00:53:09 +0000 (0:00:00.526) 0:00:01.402 ************ 2025-05-05 00:55:19.669373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-05 00:55:19.669387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-05 00:55:19.669402 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-05 00:55:19.669415 | orchestrator | 2025-05-05 00:55:19.669430 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-05 00:55:19.669467 | orchestrator | Monday 05 May 2025 00:53:09 +0000 (0:00:00.679) 0:00:02.081 ************ 2025-05-05 00:55:19.669486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.669505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.669531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.669548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.669564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.669588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.669603 | orchestrator | 2025-05-05 00:55:19.669618 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-05 00:55:19.669633 | orchestrator | Monday 05 May 2025 00:53:11 +0000 (0:00:01.708) 0:00:03.790 ************ 2025-05-05 00:55:19.669647 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:55:19.669661 | orchestrator | 2025-05-05 00:55:19.669759 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-05 00:55:19.669779 | orchestrator | Monday 05 May 2025 00:53:12 +0000 (0:00:00.682) 0:00:04.472 ************ 2025-05-05 00:55:19.669806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.669823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.669846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.669862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.669886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.669902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.669926 | orchestrator | 2025-05-05 00:55:19.669940 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-05 00:55:19.669955 | orchestrator | Monday 05 May 2025 00:53:15 +0000 (0:00:02.870) 0:00:07.343 ************ 2025-05-05 00:55:19.669970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-05 00:55:19.669985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-05 00:55:19.670000 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:55:19.670068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-05 00:55:19.670088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-05 00:55:19.670112 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:55:19.670127 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-05 00:55:19.670143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-05 00:55:19.670158 | 
orchestrator | skipping: [testbed-node-2] 2025-05-05 00:55:19.670172 | orchestrator | 2025-05-05 00:55:19.670187 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-05 00:55:19.670210 | orchestrator | Monday 05 May 2025 00:53:15 +0000 (0:00:00.681) 0:00:08.024 ************ 2025-05-05 00:55:19.670232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-05 00:55:19.670248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-05 00:55:19.670271 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:55:19.670286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-05 00:55:19.670324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-05 00:55:19.670340 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:55:19.670361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-05 00:55:19.670377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-05 00:55:19.670400 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:55:19.670414 | orchestrator | 2025-05-05 00:55:19.670429 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-05 00:55:19.670443 | orchestrator | Monday 05 May 2025 00:53:16 +0000 (0:00:01.075) 0:00:09.100 ************ 2025-05-05 00:55:19.670458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.670473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.670488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-05 00:55:19.670510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.670532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-05 00:55:19.670548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:55:19.670563 | orchestrator |
2025-05-05 00:55:19.670577 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-05-05 00:55:19.670592 | orchestrator | Monday 05 May 2025 00:53:19 +0000 (0:00:02.381) 0:00:11.482 ************
2025-05-05 00:55:19.670606 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:55:19.670620 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:55:19.670634 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:55:19.670648 | orchestrator |
2025-05-05 00:55:19.670662 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-05-05 00:55:19.670676 | orchestrator | Monday 05 May 2025 00:53:22 +0000 (0:00:03.341) 0:00:14.823 ************
2025-05-05 00:55:19.670690 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:55:19.670704 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:55:19.670718 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:55:19.670732 | orchestrator |
2025-05-05 00:55:19.670746 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-05-05 00:55:19.670760 | orchestrator | Monday 05 May 2025 00:53:24 +0000 (0:00:01.533) 0:00:16.356 ************
2025-05-05 00:55:19.670789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-05 00:55:19.670805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-05 00:55:19.670820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-05 00:55:19.670835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:55:19.670857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:55:19.670879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-05 00:55:19.670895 | orchestrator |
2025-05-05 00:55:19.670910 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-05 00:55:19.670924 | orchestrator | Monday 05 May 2025 00:53:26 +0000 (0:00:02.413) 0:00:18.769 ************
2025-05-05 00:55:19.670938 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:55:19.670952 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:55:19.670966 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:55:19.670981 | orchestrator |
2025-05-05 00:55:19.670995 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-05 00:55:19.671009 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.399) 0:00:19.168 ************
2025-05-05 00:55:19.671023 | orchestrator |
2025-05-05 00:55:19.671037 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-05 00:55:19.671052 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.245) 0:00:19.414 ************
2025-05-05 00:55:19.671066 | orchestrator |
2025-05-05 00:55:19.671080 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-05 00:55:19.671094 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.054) 0:00:19.468 ************
2025-05-05 00:55:19.671108 | orchestrator |
2025-05-05 00:55:19.671122 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-05-05 00:55:19.671136 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.053) 0:00:19.522 ************
2025-05-05 00:55:19.671150 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:55:19.671164 | orchestrator |
2025-05-05 00:55:19.671178 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-05-05 00:55:19.671192 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.173) 0:00:19.696 ************
2025-05-05 00:55:19.671206 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:55:19.671220 | orchestrator |
2025-05-05 00:55:19.671234 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-05-05 00:55:19.671248 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.334) 0:00:20.031 ************
2025-05-05 00:55:19.671262 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:55:19.671276 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:55:19.671297 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:55:19.671329 | orchestrator |
2025-05-05 00:55:19.671344 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-05-05 00:55:19.671358 | orchestrator | Monday 05 May 2025 00:53:57 +0000 (0:00:29.122) 0:00:49.153 ************
2025-05-05 00:55:19.671372 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:55:19.671386 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:55:19.671400 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:55:19.671414 | orchestrator |
2025-05-05 00:55:19.671428 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-05 00:55:19.671442 | orchestrator | Monday 05 May 2025 00:55:04 +0000 (0:01:07.838) 0:01:56.991 ************
2025-05-05 00:55:19.671456 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:55:19.671470 | orchestrator |
2025-05-05 00:55:19.671484 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-05-05 00:55:19.671498 | orchestrator | Monday 05 May 2025 00:55:05 +0000 (0:00:00.713) 0:01:57.705 ************
2025-05-05 00:55:19.671512 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:55:19.671526 | orchestrator |
2025-05-05 00:55:19.671540 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-05-05 00:55:19.671554 | orchestrator | Monday 05 May 2025 00:55:08 +0000 (0:00:02.649) 0:02:00.354 ************
2025-05-05 00:55:19.671569 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:55:19.671583 | orchestrator |
2025-05-05 00:55:19.671597 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-05-05 00:55:19.671616 | orchestrator | Monday 05 May 2025 00:55:10 +0000 (0:00:02.632) 0:02:02.986 ************
2025-05-05 00:55:19.671630 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:55:19.671644 | orchestrator |
2025-05-05 00:55:19.671658 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-05-05 00:55:19.671673 | orchestrator | Monday 05 May 2025 00:55:13 +0000 (0:00:02.979) 0:02:05.966 ************
2025-05-05 00:55:19.671687 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:55:19.671701 | orchestrator |
2025-05-05 00:55:19.671720 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:55:22.728830 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-05 00:55:22.728960 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-05 00:55:22.728979 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-05 00:55:22.728995 | orchestrator |
2025-05-05 00:55:22.729010 | orchestrator |
2025-05-05 00:55:22.729025 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:55:22.729040 | orchestrator | Monday 05 May 2025 00:55:16 +0000 (0:00:03.089) 0:02:09.056 ************
2025-05-05 00:55:22.729054 | orchestrator | ===============================================================================
2025-05-05 00:55:22.729068 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 67.84s
2025-05-05 00:55:22.729082 | orchestrator | opensearch : Restart opensearch container ------------------------------ 29.12s
2025-05-05 00:55:22.729096 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.34s
2025-05-05 00:55:22.729115 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.09s
2025-05-05 00:55:22.729138 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.98s
2025-05-05 00:55:22.729162 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.87s
2025-05-05 00:55:22.729187 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.65s
2025-05-05 00:55:22.729207 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.63s
2025-05-05 00:55:22.729255 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.41s
2025-05-05 00:55:22.729270 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s
2025-05-05 00:55:22.729284 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s
2025-05-05 00:55:22.729299 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.53s
2025-05-05 00:55:22.729405 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.08s
2025-05-05 00:55:22.729421 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s
2025-05-05 00:55:22.729438 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s
2025-05-05 00:55:22.729454 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.68s
2025-05-05 00:55:22.729486 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s
2025-05-05 00:55:22.729502 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-05-05 00:55:22.729518 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.40s
2025-05-05 00:55:22.729534 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.35s
2025-05-05 00:55:22.729550 | orchestrator | 2025-05-05 00:55:19 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:22.729566 | orchestrator | 2025-05-05 00:55:19 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:22.729600 | orchestrator | 2025-05-05 00:55:22 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:22.730411 | orchestrator | 2025-05-05 00:55:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:22.731932 | orchestrator | 2025-05-05 00:55:22 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:25.791206 | orchestrator | 2025-05-05 00:55:22 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:25.791365 | orchestrator | 2025-05-05 00:55:25 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:25.794208 | orchestrator | 2025-05-05 00:55:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:25.799830 | orchestrator | 2025-05-05 00:55:25 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:28.836847 | orchestrator | 2025-05-05 00:55:25 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:28.836998 | orchestrator | 2025-05-05 00:55:28 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:28.841403 | orchestrator | 2025-05-05 00:55:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:31.897555 | orchestrator | 2025-05-05 00:55:28 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:31.897666 | orchestrator | 2025-05-05 00:55:28 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:31.897701 | orchestrator | 2025-05-05 00:55:31 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:31.898231 | orchestrator | 2025-05-05 00:55:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:31.900134 | orchestrator | 2025-05-05 00:55:31 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:34.951005 | orchestrator | 2025-05-05 00:55:31 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:34.951127 | orchestrator | 2025-05-05 00:55:34 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:34.952291 | orchestrator | 2025-05-05 00:55:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:34.955866 | orchestrator | 2025-05-05 00:55:34 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:38.016042 | orchestrator | 2025-05-05 00:55:34 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:38.016180 | orchestrator | 2025-05-05 00:55:38 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:38.017828 | orchestrator | 2025-05-05 00:55:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:38.019514 | orchestrator | 2025-05-05 00:55:38 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:41.080912 | orchestrator | 2025-05-05 00:55:38 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:41.081055 | orchestrator | 2025-05-05 00:55:41 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:41.082881 | orchestrator | 2025-05-05 00:55:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:41.084639 | orchestrator | 2025-05-05 00:55:41 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:44.145687 | orchestrator | 2025-05-05 00:55:41 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:44.145881 | orchestrator | 2025-05-05 00:55:44 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:44.148874 | orchestrator | 2025-05-05 00:55:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:44.150182 | orchestrator | 2025-05-05 00:55:44 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:47.201650 | orchestrator | 2025-05-05 00:55:44 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:47.201750 | orchestrator | 2025-05-05 00:55:47 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:47.203526 | orchestrator | 2025-05-05 00:55:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:47.204326 | orchestrator | 2025-05-05 00:55:47 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:47.204530 | orchestrator | 2025-05-05 00:55:47 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:50.255495 | orchestrator | 2025-05-05 00:55:50 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:50.257190 | orchestrator | 2025-05-05 00:55:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:50.261334 | orchestrator | 2025-05-05 00:55:50 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:53.308873 | orchestrator | 2025-05-05 00:55:50 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:53.308969 | orchestrator | 2025-05-05 00:55:53 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:53.310509 | orchestrator | 2025-05-05 00:55:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:53.312949 | orchestrator | 2025-05-05 00:55:53 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:56.367761 | orchestrator | 2025-05-05 00:55:53 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:56.367899 | orchestrator | 2025-05-05 00:55:56 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:56.370070 | orchestrator | 2025-05-05 00:55:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:56.371588 | orchestrator | 2025-05-05 00:55:56 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:55:59.438394 | orchestrator | 2025-05-05 00:55:56 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:55:59.438536 | orchestrator | 2025-05-05 00:55:59 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:55:59.440232 | orchestrator | 2025-05-05 00:55:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:55:59.443650 | orchestrator | 2025-05-05 00:55:59 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:02.496733 | orchestrator | 2025-05-05 00:55:59 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:02.496891 | orchestrator | 2025-05-05 00:56:02 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:02.498663 | orchestrator | 2025-05-05 00:56:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:02.500660 | orchestrator | 2025-05-05 00:56:02 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:05.547435 | orchestrator | 2025-05-05 00:56:02 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:05.547563 | orchestrator | 2025-05-05 00:56:05 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:05.548889 | orchestrator | 2025-05-05 00:56:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:05.550629 | orchestrator | 2025-05-05 00:56:05 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:08.594736 | orchestrator | 2025-05-05 00:56:05 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:08.594876 | orchestrator | 2025-05-05 00:56:08 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:08.596448 | orchestrator | 2025-05-05 00:56:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:08.598181 | orchestrator | 2025-05-05 00:56:08 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:11.640842 | orchestrator | 2025-05-05 00:56:08 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:11.640988 | orchestrator | 2025-05-05 00:56:11 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:11.642586 | orchestrator | 2025-05-05 00:56:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:11.644237 | orchestrator | 2025-05-05 00:56:11 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:14.695599 | orchestrator | 2025-05-05 00:56:11 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:14.695730 | orchestrator | 2025-05-05 00:56:14 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:14.698135 | orchestrator | 2025-05-05 00:56:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:14.701221 | orchestrator | 2025-05-05 00:56:14 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:14.702499 | orchestrator | 2025-05-05 00:56:14 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:17.753074 | orchestrator | 2025-05-05 00:56:17 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:17.753442 | orchestrator | 2025-05-05 00:56:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:17.753470 | orchestrator | 2025-05-05 00:56:17 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:20.801061 | orchestrator | 2025-05-05 00:56:17 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:20.801217 | orchestrator | 2025-05-05 00:56:20 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:20.801591 | orchestrator | 2025-05-05 00:56:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:20.802390 | orchestrator | 2025-05-05 00:56:20 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:20.802502 | orchestrator | 2025-05-05 00:56:20 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:23.849259 | orchestrator | 2025-05-05 00:56:23 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:23.850620 | orchestrator | 2025-05-05 00:56:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:23.852933 | orchestrator | 2025-05-05 00:56:23 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:23.853498 | orchestrator | 2025-05-05 00:56:23 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:26.906932 | orchestrator | 2025-05-05 00:56:26 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:29.974204 | orchestrator | 2025-05-05 00:56:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:29.974369 | orchestrator | 2025-05-05 00:56:26 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:29.974393 | orchestrator | 2025-05-05 00:56:26 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:29.974428 | orchestrator | 2025-05-05 00:56:29 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state STARTED
2025-05-05 00:56:29.976504 | orchestrator | 2025-05-05 00:56:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:56:29.978983 | orchestrator | 2025-05-05 00:56:29 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state STARTED
2025-05-05 00:56:29.979545 | orchestrator | 2025-05-05 00:56:29 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:56:33.028819 | orchestrator | 2025-05-05 00:56:33 | INFO  | Task fb8c08f3-1643-4307-9bcb-8e2211b41707 is in state SUCCESS
2025-05-05 00:56:33.032149 | orchestrator |
2025-05-05 00:56:33.032722 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-05 00:56:33.032765 | orchestrator |
2025-05-05 00:56:33.032781 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-05-05 00:56:33.032796 | orchestrator |
2025-05-05 00:56:33.032811 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-05 00:56:33.032825 | orchestrator | Monday 05 May 2025 00:43:51 +0000 (0:00:01.484) 0:00:01.484 ************
2025-05-05 00:56:33.032883 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.032899 | orchestrator |
2025-05-05 00:56:33.032913 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-05 00:56:33.032944 | orchestrator | Monday 05 May 2025 00:43:52 +0000 (0:00:01.031) 0:00:02.516 ************
2025-05-05 00:56:33.032960 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.032974 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-05 00:56:33.032989 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-05 00:56:33.033235 | orchestrator |
2025-05-05 00:56:33.033251 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-05 00:56:33.033459 | orchestrator | Monday 05 May 2025 00:43:52 +0000 (0:00:00.550) 0:00:03.067 ************
2025-05-05 00:56:33.033498 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.033520 | orchestrator |
2025-05-05 00:56:33.033534 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-05 00:56:33.033548 | orchestrator | Monday 05 May 2025 00:43:53 +0000 (0:00:01.064) 0:00:04.131 ************
2025-05-05 00:56:33.033562 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.033578 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.033592 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.033607 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.033621 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.033635 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.033650 | orchestrator |
2025-05-05 00:56:33.033927 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-05 00:56:33.033950 | orchestrator | Monday 05 May 2025 00:43:55 +0000 (0:00:01.400) 0:00:05.531 ************
2025-05-05 00:56:33.033964 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.033980 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.033994 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.034140 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.034163 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.034178 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.034193 | orchestrator |
2025-05-05 00:56:33.034208 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-05 00:56:33.034228 | orchestrator | Monday 05 May 2025 00:43:56 +0000 (0:00:00.892) 0:00:06.424 ************
2025-05-05 00:56:33.034243 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.034258 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.034273 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.034288 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.034302 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.034339 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.034354 | orchestrator |
2025-05-05 00:56:33.034368 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-05 00:56:33.034382 | orchestrator | Monday 05 May 2025 00:43:57 +0000 (0:00:01.069) 0:00:07.494 ************
2025-05-05 00:56:33.034396 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.034410 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.034424 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.034438 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.034452 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.034477 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.034491 | orchestrator |
2025-05-05 00:56:33.034505 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-05 00:56:33.034519 | orchestrator | Monday 05 May 2025 00:43:58 +0000 (0:00:00.860) 0:00:08.355 ************
2025-05-05 00:56:33.034533 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.034548 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.034561 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.034578 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.034594 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.034609 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.034625 | orchestrator |
2025-05-05 00:56:33.034641 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-05 00:56:33.034658 | orchestrator | Monday 05 May 2025 00:43:59 +0000 (0:00:00.888) 0:00:09.244 ************
2025-05-05 00:56:33.034673 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.034689 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.034704 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.034720 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.034736 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.034752 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.034769 | orchestrator |
2025-05-05 00:56:33.034785 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-05 00:56:33.034815 | orchestrator | Monday 05 May 2025 00:43:59 +0000 (0:00:00.915) 0:00:10.160 ************
2025-05-05 00:56:33.034832 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.034849 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.034865 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.034880 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.034897 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.034914 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.034930 | orchestrator |
2025-05-05 00:56:33.034944 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-05 00:56:33.034959 | orchestrator | Monday 05 May 2025 00:44:01 +0000 (0:00:01.149) 0:00:11.309 ************
2025-05-05 00:56:33.034973 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.034987 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.035001 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.035015 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.035029 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.035043 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.035056 | orchestrator |
2025-05-05 00:56:33.035418 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-05 00:56:33.035447 | orchestrator | Monday 05 May 2025 00:44:02 +0000 (0:00:00.958) 0:00:12.267 ************
2025-05-05 00:56:33.035462 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.035477 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:56:33.035491 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:56:33.035505 | orchestrator |
2025-05-05 00:56:33.035520 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-05 00:56:33.035535 | orchestrator | Monday 05 May 2025 00:44:03 +0000 (0:00:01.017) 0:00:13.285 ************
2025-05-05 00:56:33.035549 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.035563 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.035577 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.035591 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.035606 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.035620 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.035634 | orchestrator |
2025-05-05 00:56:33.035648 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-05 00:56:33.035662 | orchestrator | Monday 05 May 2025 00:44:04 +0000 (0:00:01.432) 0:00:14.717 ************
2025-05-05 00:56:33.035677 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.036372 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:56:33.036408 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:56:33.036422 | orchestrator |
2025-05-05 00:56:33.036435 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-05 00:56:33.036446 | orchestrator | Monday 05 May 2025 00:44:07 +0000 (0:00:03.303) 0:00:18.021 ************
2025-05-05 00:56:33.036457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.036468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-05 00:56:33.036479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-05 00:56:33.036490 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.036500 | orchestrator |
2025-05-05 00:56:33.036511 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-05 00:56:33.036528 | orchestrator | Monday 05 May 2025 00:44:08 +0000 (0:00:00.440) 0:00:18.461 ************
2025-05-05 00:56:33.036539 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-05 00:56:33.036563 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-05 00:56:33.036574 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-05 00:56:33.036584 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.036595 | orchestrator |
2025-05-05 00:56:33.036605 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-05 00:56:33.036615 | orchestrator | Monday 05 May 2025 00:44:09 +0000 (0:00:00.773) 0:00:19.235 ************
2025-05-05 00:56:33.036627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-05 00:56:33.036639 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-05 00:56:33.036650 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-05 00:56:33.036660 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.036671 | orchestrator |
2025-05-05 00:56:33.036681 | orchestrator | TASK [ceph-facts : set_fact running_mon - container]
*************************** 2025-05-05 00:56:33.036759 | orchestrator | Monday 05 May 2025 00:44:09 +0000 (0:00:00.269) 0:00:19.505 ************ 2025-05-05 00:56:33.036778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-05 00:44:05.306905', 'end': '2025-05-05 00:44:05.566373', 'delta': '0:00:00.259468', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-05 00:56:33.036792 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-05 00:44:06.235596', 'end': '2025-05-05 00:44:06.502957', 'delta': '0:00:00.267361', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-05 00:56:33.036804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-05 00:44:07.213170', 'end': '2025-05-05 00:44:07.647289', 'delta': '0:00:00.434119', 'msg': '', 'invocation': {'module_args': {'_raw_params': 
'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-05 00:56:33.036822 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.036833 | orchestrator | 2025-05-05 00:56:33.036843 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-05 00:56:33.036854 | orchestrator | Monday 05 May 2025 00:44:09 +0000 (0:00:00.234) 0:00:19.739 ************ 2025-05-05 00:56:33.036864 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.036875 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.036885 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.036895 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.036905 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.036915 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.036926 | orchestrator | 2025-05-05 00:56:33.036936 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-05 00:56:33.036947 | orchestrator | Monday 05 May 2025 00:44:11 +0000 (0:00:01.630) 0:00:21.369 ************ 2025-05-05 00:56:33.036957 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.036967 | orchestrator | 2025-05-05 00:56:33.036977 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-05 00:56:33.036987 | orchestrator | Monday 05 May 2025 00:44:11 +0000 (0:00:00.684) 0:00:22.053 ************ 2025-05-05 00:56:33.036997 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.037008 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.037019 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.037029 | 
orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.037040 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.037050 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.037060 | orchestrator | 2025-05-05 00:56:33.037071 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-05 00:56:33.037081 | orchestrator | Monday 05 May 2025 00:44:12 +0000 (0:00:00.485) 0:00:22.539 ************ 2025-05-05 00:56:33.037091 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.037101 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.037112 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.037921 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.037945 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.037955 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.037965 | orchestrator | 2025-05-05 00:56:33.037976 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-05 00:56:33.037987 | orchestrator | Monday 05 May 2025 00:44:13 +0000 (0:00:01.433) 0:00:23.972 ************ 2025-05-05 00:56:33.037997 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038008 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.038057 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.038070 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.038081 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.038092 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.038102 | orchestrator | 2025-05-05 00:56:33.038113 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-05 00:56:33.038124 | orchestrator | Monday 05 May 2025 00:44:14 +0000 (0:00:00.829) 0:00:24.802 ************ 2025-05-05 00:56:33.038200 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038214 | 
orchestrator | 2025-05-05 00:56:33.038225 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-05 00:56:33.038236 | orchestrator | Monday 05 May 2025 00:44:14 +0000 (0:00:00.316) 0:00:25.118 ************ 2025-05-05 00:56:33.038484 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038499 | orchestrator | 2025-05-05 00:56:33.038539 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-05 00:56:33.038551 | orchestrator | Monday 05 May 2025 00:44:15 +0000 (0:00:00.284) 0:00:25.402 ************ 2025-05-05 00:56:33.038562 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038573 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.038584 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.038595 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.038606 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.038616 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.038627 | orchestrator | 2025-05-05 00:56:33.038638 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-05 00:56:33.038650 | orchestrator | Monday 05 May 2025 00:44:16 +0000 (0:00:00.824) 0:00:26.227 ************ 2025-05-05 00:56:33.038661 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038672 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.038683 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.038694 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.038705 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.038715 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.038726 | orchestrator | 2025-05-05 00:56:33.038737 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-05 00:56:33.038748 | orchestrator | Monday 05 May 2025 00:44:16 +0000 
(0:00:00.780) 0:00:27.007 ************ 2025-05-05 00:56:33.038759 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038770 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.038780 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.038792 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.038802 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.038813 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.038824 | orchestrator | 2025-05-05 00:56:33.038835 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-05 00:56:33.038846 | orchestrator | Monday 05 May 2025 00:44:17 +0000 (0:00:00.547) 0:00:27.554 ************ 2025-05-05 00:56:33.038857 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038867 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.038878 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.038889 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.038900 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.038911 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.038922 | orchestrator | 2025-05-05 00:56:33.038933 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-05 00:56:33.038944 | orchestrator | Monday 05 May 2025 00:44:18 +0000 (0:00:00.866) 0:00:28.421 ************ 2025-05-05 00:56:33.038955 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.038966 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.038977 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.038987 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.038998 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.039009 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.039020 | orchestrator | 2025-05-05 00:56:33.039031 | orchestrator | TASK [ceph-facts : 
resolve bluestore_wal_device link(s)] *********************** 2025-05-05 00:56:33.039042 | orchestrator | Monday 05 May 2025 00:44:18 +0000 (0:00:00.626) 0:00:29.047 ************ 2025-05-05 00:56:33.039053 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.039064 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.039074 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.039397 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.039408 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.039418 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.039428 | orchestrator | 2025-05-05 00:56:33.039445 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-05 00:56:33.039465 | orchestrator | Monday 05 May 2025 00:44:19 +0000 (0:00:00.755) 0:00:29.803 ************ 2025-05-05 00:56:33.039475 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.039484 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.039492 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.039501 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.039510 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.039524 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.039534 | orchestrator | 2025-05-05 00:56:33.039543 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-05 00:56:33.039552 | orchestrator | Monday 05 May 2025 00:44:20 +0000 (0:00:00.627) 0:00:30.431 ************ 2025-05-05 00:56:33.039562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part1', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part14', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part15', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part16', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.039828 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d84f93e-1c6d-4691-b492-2a4ac16c3944', 'scsi-SQEMU_QEMU_HARDDISK_3d84f93e-1c6d-4691-b492-2a4ac16c3944'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.039841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_538b5ef1-8671-4fc9-a3c4-cba69448f95c', 'scsi-SQEMU_QEMU_HARDDISK_538b5ef1-8671-4fc9-a3c4-cba69448f95c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.039851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.039994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b4716be-1a57-4f60-96f3-25458ff8018c', 'scsi-SQEMU_QEMU_HARDDISK_6b4716be-1a57-4f60-96f3-25458ff8018c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040138 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.040207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3', 'scsi-SQEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part1', 'scsi-SQEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part14', 'scsi-SQEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part15', 'scsi-SQEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part16', 'scsi-SQEMU_QEMU_HARDDISK_daf28dc1-fcee-4d4a-964d-2a80f7bc2af3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf82fd11-af58-4978-8cf4-434466d92b22', 'scsi-SQEMU_QEMU_HARDDISK_cf82fd11-af58-4978-8cf4-434466d92b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cdb59ba-b27c-4aba-91f1-5fb12951bb58', 'scsi-SQEMU_QEMU_HARDDISK_4cdb59ba-b27c-4aba-91f1-5fb12951bb58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06746298-857f-44a7-bac9-458d0cb80917', 'scsi-SQEMU_QEMU_HARDDISK_06746298-857f-44a7-bac9-458d0cb80917'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-05-05 00:56:33.040284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040402 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.040411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92', 'scsi-SQEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part1', 'scsi-SQEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part14', 'scsi-SQEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part15', 'scsi-SQEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part16', 'scsi-SQEMU_QEMU_HARDDISK_1816abff-2c25-4262-8906-0081839edd92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fe28f7c-d5bd-43b5-ae36-5544cd531e3f', 'scsi-SQEMU_QEMU_HARDDISK_4fe28f7c-d5bd-43b5-ae36-5544cd531e3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_408b0152-937f-48ea-b624-2492cd2dac87', 'scsi-SQEMU_QEMU_HARDDISK_408b0152-937f-48ea-b624-2492cd2dac87'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2284275b-81dd-4b13-b1ce-7a79fe4b7203', 'scsi-SQEMU_QEMU_HARDDISK_2284275b-81dd-4b13-b1ce-7a79fe4b7203'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b45d62aa--c8ca--51ec--bff2--6c96656db621-osd--block--b45d62aa--c8ca--51ec--bff2--6c96656db621', 'dm-uuid-LVM-hnmgOczfBJunDr1vwEvWDejbUNuXDIdyFAgOBK6ZbyjK5dwz2J33ScNK1h9SrZgs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac6a629e--412f--52b8--abc2--7f30e47159be-osd--block--ac6a629e--412f--52b8--abc2--7f30e47159be', 'dm-uuid-LVM-vQsQ946lcJ2Gx4z82zLL3f8f7WZpY02FQ74UNF8fMdtuEc7kQmbe8B7IY1X70JwQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040621 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.040630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040667 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040756 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b45d62aa--c8ca--51ec--bff2--6c96656db621-osd--block--b45d62aa--c8ca--51ec--bff2--6c96656db621'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SXdFIF-7MUr-cpGo-XIOC-Axp8-F4Td-E7diUm', 'scsi-0QEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7', 'scsi-SQEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ac6a629e--412f--52b8--abc2--7f30e47159be-osd--block--ac6a629e--412f--52b8--abc2--7f30e47159be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NWZ3aZ-1A0u-tZ37-0K7f-NDWE-eBlJ-Uoe6Pz', 'scsi-0QEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6', 'scsi-SQEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8', 'scsi-SQEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.040840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f-osd--block--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f', 'dm-uuid-LVM-klE3QE9qijUUyOsOiKdGx1JXqX2wl0UDOeidRx9ZMXtG3iUc6PlvnvMVxAew4ir4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dbbf782--cf90--597f--b1d9--d891fd7b35f3-osd--block--1dbbf782--cf90--597f--b1d9--d891fd7b35f3', 
'dm-uuid-LVM-xw3KxQooL3tY7dpsf9NDBb8HRiuK20YIhVAffn2UJNeESfvGpN0EZwPNMbuP1Xhi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040932 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.040945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.040998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part1', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part14', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part15', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part16', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f-osd--block--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eeBbD7-r2vB-eYNZ-f0Pz-OaS4-Agz1-pbcdzj', 'scsi-0QEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164', 'scsi-SQEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1dbbf782--cf90--597f--b1d9--d891fd7b35f3-osd--block--1dbbf782--cf90--597f--b1d9--d891fd7b35f3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ao8opF-U5PX-1ZHV-JYce-Qej7-j4Ty-xArusv', 'scsi-0QEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170', 'scsi-SQEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19ded391--41bb--58c4--acef--51f998367f5e-osd--block--19ded391--41bb--58c4--acef--51f998367f5e', 'dm-uuid-LVM-iP9n6Su8uagSXZCmHykXetfNCMzJp85hXNsiqszFZRXMAFHTd69p76ijZC3CBpGQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e', 'scsi-SQEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e-osd--block--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e', 'dm-uuid-LVM-QcoMKqkYKnqoYLB7gRNJ5H919jNE74oCUkSZbjnDQns5OA7mOeS3YBStbwOsjCDz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041163 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.041173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:56:33.041324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041395 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--19ded391--41bb--58c4--acef--51f998367f5e-osd--block--19ded391--41bb--58c4--acef--51f998367f5e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xYYKKT-4p1b-FemN-hazU-fS5q-TY3A-e9eZXz', 'scsi-0QEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370', 'scsi-SQEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e-osd--block--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-riweGy-1qvw-WXU0-xyl9-7Pb2-SY2q-iapc2L', 'scsi-0QEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10', 'scsi-SQEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d', 'scsi-SQEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:56:33.041443 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.041452 | orchestrator | 2025-05-05 00:56:33.041461 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-05 00:56:33.041470 | orchestrator | Monday 05 May 2025 00:44:21 +0000 (0:00:01.317) 0:00:31.748 ************ 2025-05-05 00:56:33.041479 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.041488 | orchestrator | 2025-05-05 00:56:33.041497 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-05 00:56:33.041505 | orchestrator | Monday 05 May 2025 00:44:21 +0000 (0:00:00.256) 0:00:32.004 ************ 2025-05-05 00:56:33.041514 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.041523 | orchestrator | 2025-05-05 00:56:33.041531 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-05 00:56:33.041540 | orchestrator | Monday 05 May 2025 00:44:21 +0000 (0:00:00.155) 0:00:32.160 ************ 2025-05-05 00:56:33.041548 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.041557 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.041566 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.041575 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.041583 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.041592 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.041600 | orchestrator | 2025-05-05 00:56:33.041609 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-05 00:56:33.041618 | orchestrator | Monday 05 May 2025 00:44:22 +0000 (0:00:00.694) 0:00:32.855 ************ 2025-05-05 00:56:33.041631 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.041640 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.041649 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.041657 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.041666 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.041675 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.041683 | orchestrator | 2025-05-05 00:56:33.041692 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-05 00:56:33.041700 | orchestrator | Monday 05 May 2025 00:44:23 +0000 (0:00:01.164) 0:00:34.019 ************ 2025-05-05 00:56:33.041709 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.041718 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.041726 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.041735 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.041743 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.041752 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.041771 | orchestrator | 
2025-05-05 00:56:33.041781 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-05 00:56:33.041791 | orchestrator | Monday 05 May 2025 00:44:24 +0000 (0:00:00.617) 0:00:34.636 ************ 2025-05-05 00:56:33.041800 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.041809 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.041818 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.041827 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.041836 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.041891 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.041905 | orchestrator | 2025-05-05 00:56:33.041915 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-05 00:56:33.041924 | orchestrator | Monday 05 May 2025 00:44:25 +0000 (0:00:00.982) 0:00:35.619 ************ 2025-05-05 00:56:33.041934 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.041943 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.041952 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.041961 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.041970 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.041979 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.041988 | orchestrator | 2025-05-05 00:56:33.041998 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-05 00:56:33.042007 | orchestrator | Monday 05 May 2025 00:44:26 +0000 (0:00:00.835) 0:00:36.455 ************ 2025-05-05 00:56:33.042038 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.042049 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.042058 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.042068 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.042077 | orchestrator | 
skipping: [testbed-node-4] 2025-05-05 00:56:33.042086 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.042095 | orchestrator | 2025-05-05 00:56:33.042104 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-05 00:56:33.042114 | orchestrator | Monday 05 May 2025 00:44:27 +0000 (0:00:01.279) 0:00:37.734 ************ 2025-05-05 00:56:33.042123 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.042132 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.042141 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.042150 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.042159 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.042173 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.042182 | orchestrator | 2025-05-05 00:56:33.042192 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-05 00:56:33.042201 | orchestrator | Monday 05 May 2025 00:44:28 +0000 (0:00:00.893) 0:00:38.627 ************ 2025-05-05 00:56:33.042211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-05 00:56:33.042220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.042231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-05 00:56:33.042246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-05 00:56:33.042256 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-05 00:56:33.042265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.042275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-05 00:56:33.042284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-05 00:56:33.042297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.042351 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-05 00:56:33.042362 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.042371 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.042421 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-05 00:56:33.042432 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.042441 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-05 00:56:33.042451 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-05 00:56:33.042459 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.042468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-05 00:56:33.042477 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-05 00:56:33.042486 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.042495 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-05 00:56:33.042504 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-05 00:56:33.042512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-05 00:56:33.042521 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.042530 | orchestrator | 2025-05-05 00:56:33.042539 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-05 00:56:33.042548 | orchestrator | Monday 05 May 2025 00:44:31 +0000 (0:00:03.199) 0:00:41.827 ************ 2025-05-05 00:56:33.042556 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.042565 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-05 00:56:33.042574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.042583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-05 00:56:33.042591 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-0)  2025-05-05 00:56:33.042600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.042609 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-05 00:56:33.042618 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.042627 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-05 00:56:33.042635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-05 00:56:33.042644 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.042653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-05 00:56:33.042662 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-05 00:56:33.042670 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.042679 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-05 00:56:33.042688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-05 00:56:33.042696 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.042705 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-05 00:56:33.042714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-05 00:56:33.042723 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-05 00:56:33.042774 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-05 00:56:33.042786 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.042795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-05 00:56:33.042811 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.042820 | orchestrator | 2025-05-05 00:56:33.042828 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-05 00:56:33.042837 | orchestrator | Monday 05 May 2025 00:44:33 +0000 (0:00:02.170) 
0:00:43.997 ************ 2025-05-05 00:56:33.042845 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:56:33.042854 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-05 00:56:33.042863 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-05 00:56:33.042871 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-05 00:56:33.042880 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-05 00:56:33.042888 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-05 00:56:33.042897 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-05 00:56:33.042905 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-05 00:56:33.042914 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-05 00:56:33.042922 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-05 00:56:33.042931 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-05 00:56:33.042939 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-05 00:56:33.042948 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-05 00:56:33.042956 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-05 00:56:33.042964 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-05 00:56:33.042973 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-05 00:56:33.042981 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-05 00:56:33.042990 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-05 00:56:33.042998 | orchestrator | 2025-05-05 00:56:33.043007 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-05 00:56:33.043015 | orchestrator | Monday 05 May 2025 00:44:38 +0000 (0:00:04.295) 0:00:48.293 ************ 2025-05-05 00:56:33.043024 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.043032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.043208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.043217 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.043225 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-05 00:56:33.043233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-05 00:56:33.043241 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-05 00:56:33.043249 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-05 00:56:33.043257 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-05 00:56:33.043265 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.043274 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-05 00:56:33.043286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-05 00:56:33.043294 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.043303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-05 00:56:33.043327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-05 00:56:33.043335 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-05 00:56:33.043343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-05 00:56:33.043351 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-05 00:56:33.043359 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.043367 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.043375 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-05 00:56:33.043384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-05 00:56:33.043392 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-05 00:56:33.043406 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.043414 | orchestrator | 2025-05-05 00:56:33.043422 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-05 00:56:33.043430 | orchestrator | Monday 05 May 2025 00:44:39 +0000 (0:00:01.048) 0:00:49.341 ************ 2025-05-05 00:56:33.043438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.043452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.043460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.043468 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-05 00:56:33.043476 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-05 00:56:33.043484 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-05 00:56:33.043537 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.043546 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-05 00:56:33.043554 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-05 00:56:33.043562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-05 00:56:33.043570 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.043578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-05 00:56:33.043643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-05 00:56:33.043652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-05 00:56:33.043660 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.043668 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.043676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-05 00:56:33.043726 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-05 00:56:33.043765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-05 00:56:33.043774 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-05 00:56:33.043782 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.043791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-05 00:56:33.043799 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-05 00:56:33.043807 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.043815 | orchestrator | 2025-05-05 00:56:33.043823 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-05 00:56:33.043832 | orchestrator | Monday 05 May 2025 00:44:40 +0000 (0:00:01.021) 0:00:50.363 ************ 2025-05-05 00:56:33.043840 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-05 00:56:33.043848 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-05 00:56:33.043857 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-05 00:56:33.043865 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-05 00:56:33.043873 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-05 00:56:33.043881 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-05 00:56:33.043890 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-05 00:56:33.043898 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-05 00:56:33.043906 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-05 00:56:33.043914 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-05 00:56:33.043922 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-05 00:56:33.043937 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-05 00:56:33.043946 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-05 00:56:33.043954 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-05 00:56:33.043962 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-05 00:56:33.043970 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.043979 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.043987 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-05 00:56:33.043995 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-05 00:56:33.044003 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-05 00:56:33.044012 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.044020 | orchestrator | 2025-05-05 00:56:33.044028 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-05 00:56:33.044075 | orchestrator | Monday 05 May 2025 00:44:41 +0000 (0:00:01.344) 0:00:51.708 ************ 2025-05-05 00:56:33.044087 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.044096 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.044104 | orchestrator | skipping: [testbed-node-2] 2025-05-05 
00:56:33.044113 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.044121 | orchestrator | 2025-05-05 00:56:33.044129 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-05 00:56:33.044138 | orchestrator | Monday 05 May 2025 00:44:43 +0000 (0:00:01.719) 0:00:53.427 ************ 2025-05-05 00:56:33.044146 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.044154 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.044163 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.044171 | orchestrator | 2025-05-05 00:56:33.044179 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-05 00:56:33.044188 | orchestrator | Monday 05 May 2025 00:44:44 +0000 (0:00:00.938) 0:00:54.365 ************ 2025-05-05 00:56:33.044196 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.044204 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.044212 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.044220 | orchestrator | 2025-05-05 00:56:33.044229 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-05 00:56:33.044237 | orchestrator | Monday 05 May 2025 00:44:44 +0000 (0:00:00.727) 0:00:55.093 ************ 2025-05-05 00:56:33.044245 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.044254 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.044262 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.044270 | orchestrator | 2025-05-05 00:56:33.044279 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-05 00:56:33.044287 | orchestrator | Monday 05 May 2025 00:44:45 +0000 (0:00:00.720) 0:00:55.814 ************ 2025-05-05 
00:56:33.044295 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.044303 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.044326 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.044334 | orchestrator | 2025-05-05 00:56:33.044343 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-05 00:56:33.044392 | orchestrator | Monday 05 May 2025 00:44:46 +0000 (0:00:00.911) 0:00:56.725 ************ 2025-05-05 00:56:33.044403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.044411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.044419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.044434 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.044443 | orchestrator | 2025-05-05 00:56:33.044451 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-05 00:56:33.044459 | orchestrator | Monday 05 May 2025 00:44:47 +0000 (0:00:00.803) 0:00:57.529 ************ 2025-05-05 00:56:33.044467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.044475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.044483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.044491 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.044499 | orchestrator | 2025-05-05 00:56:33.044508 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-05 00:56:33.044516 | orchestrator | Monday 05 May 2025 00:44:48 +0000 (0:00:00.771) 0:00:58.301 ************ 2025-05-05 00:56:33.044523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.044532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.044540 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.044548 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.044560 | orchestrator |
2025-05-05 00:56:33.044569 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.044577 | orchestrator | Monday 05 May 2025 00:44:49 +0000 (0:00:01.143) 0:00:59.444 ************
2025-05-05 00:56:33.044585 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.044593 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.044601 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.044609 | orchestrator |
2025-05-05 00:56:33.044617 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-05 00:56:33.044625 | orchestrator | Monday 05 May 2025 00:44:49 +0000 (0:00:00.659) 0:01:00.104 ************
2025-05-05 00:56:33.044634 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.044642 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-05 00:56:33.044654 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-05 00:56:33.044662 | orchestrator |
2025-05-05 00:56:33.044670 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-05 00:56:33.044678 | orchestrator | Monday 05 May 2025 00:44:51 +0000 (0:00:01.773) 0:01:01.877 ************
2025-05-05 00:56:33.044686 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.044694 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.044702 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.044711 | orchestrator |
2025-05-05 00:56:33.044719 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.044727 | orchestrator | Monday 05 May 2025 00:44:52 +0000 (0:00:00.614) 0:01:02.492 ************
2025-05-05 00:56:33.044735 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.044743 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.044751 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.044759 | orchestrator |
2025-05-05 00:56:33.044767 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-05 00:56:33.044775 | orchestrator | Monday 05 May 2025 00:44:52 +0000 (0:00:00.628) 0:01:03.120 ************
2025-05-05 00:56:33.044783 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.044791 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.044799 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-05 00:56:33.044807 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.044815 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-05 00:56:33.044823 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.044832 | orchestrator |
2025-05-05 00:56:33.044840 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-05 00:56:33.044848 | orchestrator | Monday 05 May 2025 00:44:53 +0000 (0:00:00.537) 0:01:03.657 ************
2025-05-05 00:56:33.044856 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.044868 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.044877 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.044885 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.044893 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.044901 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.044909 | orchestrator |
2025-05-05 00:56:33.044921 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-05 00:56:33.044930 | orchestrator | Monday 05 May 2025 00:44:54 +0000 (0:00:00.756) 0:01:04.414 ************
2025-05-05 00:56:33.044938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.044946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.044954 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-05 00:56:33.044962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.044970 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.044978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-05 00:56:33.044985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-05 00:56:33.044993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-05 00:56:33.045002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-05 00:56:33.045009 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.045062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-05 00:56:33.045074 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.045084 | orchestrator |
2025-05-05 00:56:33.045093 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-05 00:56:33.045102 | orchestrator | Monday 05 May 2025 00:44:55 +0000 (0:00:00.804) 0:01:05.219 ************
2025-05-05 00:56:33.045111 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.045120 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.045130 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.045139 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.045148 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.045158 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.045168 | orchestrator |
2025-05-05 00:56:33.045177 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-05 00:56:33.045186 | orchestrator | Monday 05 May 2025 00:44:55 +0000 (0:00:00.904) 0:01:06.123 ************
2025-05-05 00:56:33.045195 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.045205 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:56:33.045214 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:56:33.045223 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-05 00:56:33.045236 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-05 00:56:33.045245 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-05 00:56:33.045255 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-05 00:56:33.045264 | orchestrator |
2025-05-05 00:56:33.045273 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-05 00:56:33.045282 | orchestrator | Monday 05 May 2025 00:44:56 +0000 (0:00:00.989) 0:01:07.113 ************
2025-05-05 00:56:33.045291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.045301 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:56:33.045321 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:56:33.045335 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-05 00:56:33.045344 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-05 00:56:33.045353 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-05 00:56:33.045362 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-05 00:56:33.045372 | orchestrator |
2025-05-05 00:56:33.045381 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-05 00:56:33.045400 | orchestrator | Monday 05 May 2025 00:44:58 +0000 (0:00:01.745) 0:01:08.859 ************
2025-05-05 00:56:33.045409 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.045418 | orchestrator |
2025-05-05 00:56:33.045427 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-05 00:56:33.045435 | orchestrator | Monday 05 May 2025 00:44:59 +0000 (0:00:01.275) 0:01:10.134 ************
2025-05-05 00:56:33.045443 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.045451 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.045459 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.045467 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.045475 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.045483 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.045491 | orchestrator |
2025-05-05 00:56:33.045500 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-05 00:56:33.045508 | orchestrator | Monday 05 May 2025 00:45:00 +0000 (0:00:00.804) 0:01:10.939 ************
2025-05-05 00:56:33.045516 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.045524 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.045532 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.045540 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.045548 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.045556 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.045564 | orchestrator |
2025-05-05 00:56:33.045572 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-05 00:56:33.045581 | orchestrator | Monday 05 May 2025 00:45:02 +0000 (0:00:01.301) 0:01:12.240 ************
2025-05-05 00:56:33.045589 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.045597 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.045605 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.045613 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.045621 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.045629 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.045637 | orchestrator |
2025-05-05 00:56:33.045645 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-05 00:56:33.045653 | orchestrator | Monday 05 May 2025 00:45:03 +0000 (0:00:01.120) 0:01:13.361 ************
2025-05-05 00:56:33.045661 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.045670 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.045678 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.045686 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.045694 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.045702 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.045710 | orchestrator |
2025-05-05 00:56:33.045719 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-05 00:56:33.045727 | orchestrator | Monday 05 May 2025 00:45:04 +0000 (0:00:01.133) 0:01:14.494 ************
2025-05-05 00:56:33.045735 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.045743 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.045802 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.045814 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.045822 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.045841 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.045849 | orchestrator |
2025-05-05 00:56:33.045858 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-05 00:56:33.045866 | orchestrator | Monday 05 May 2025 00:45:05 +0000 (0:00:00.832) 0:01:15.326 ************
2025-05-05 00:56:33.045874 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.045882 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.045891 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.045899 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.045907 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.045915 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.045923 | orchestrator |
2025-05-05 00:56:33.045931 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-05 00:56:33.045939 | orchestrator | Monday 05 May 2025 00:45:06 +0000 (0:00:01.491) 0:01:16.818 ************
2025-05-05 00:56:33.045947 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.045955 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.045964 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.045972 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.045980 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.045988 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.045996 | orchestrator |
2025-05-05 00:56:33.046004 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-05 00:56:33.046012 | orchestrator | Monday 05 May 2025 00:45:07 +0000 (0:00:01.098) 0:01:17.916 ************
2025-05-05 00:56:33.046052 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046060 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046068 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046077 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046085 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046093 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046101 | orchestrator |
2025-05-05 00:56:33.046109 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-05 00:56:33.046117 | orchestrator | Monday 05 May 2025 00:45:08 +0000 (0:00:01.127) 0:01:19.044 ************
2025-05-05 00:56:33.046125 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046133 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046141 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046149 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046157 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046165 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046173 | orchestrator |
2025-05-05 00:56:33.046181 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-05 00:56:33.046189 | orchestrator | Monday 05 May 2025 00:45:09 +0000 (0:00:00.997) 0:01:20.041 ************
2025-05-05 00:56:33.046197 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046205 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046213 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046221 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046229 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046237 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046244 | orchestrator |
2025-05-05 00:56:33.046253 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-05 00:56:33.046261 | orchestrator | Monday 05 May 2025 00:45:10 +0000 (0:00:01.065) 0:01:21.107 ************
2025-05-05 00:56:33.046269 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.046277 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.046285 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.046293 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.046301 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.046345 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.046354 | orchestrator |
2025-05-05 00:56:33.046362 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-05 00:56:33.046376 | orchestrator | Monday 05 May 2025 00:45:12 +0000 (0:00:01.238) 0:01:22.346 ************
2025-05-05 00:56:33.046384 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046392 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046400 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046408 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046416 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046423 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046430 | orchestrator |
2025-05-05 00:56:33.046439 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-05 00:56:33.046447 | orchestrator | Monday 05 May 2025 00:45:13 +0000 (0:00:01.306) 0:01:23.653 ************
2025-05-05 00:56:33.046455 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.046463 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.046470 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.046478 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046486 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046495 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046503 | orchestrator |
2025-05-05 00:56:33.046511 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-05 00:56:33.046519 | orchestrator | Monday 05 May 2025 00:45:14 +0000 (0:00:00.626) 0:01:24.279 ************
2025-05-05 00:56:33.046527 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046535 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046542 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046555 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.046563 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.046571 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.046579 | orchestrator |
2025-05-05 00:56:33.046587 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-05 00:56:33.046595 | orchestrator | Monday 05 May 2025 00:45:14 +0000 (0:00:00.737) 0:01:25.017 ************
2025-05-05 00:56:33.046603 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046611 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046619 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046627 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.046635 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.046643 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.046651 | orchestrator |
2025-05-05 00:56:33.046659 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-05 00:56:33.046711 | orchestrator | Monday 05 May 2025 00:45:15 +0000 (0:00:00.550) 0:01:25.567 ************
2025-05-05 00:56:33.046722 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046730 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046738 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046746 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.046754 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.046762 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.046770 | orchestrator |
2025-05-05 00:56:33.046778 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-05 00:56:33.046786 | orchestrator | Monday 05 May 2025 00:45:16 +0000 (0:00:00.690) 0:01:26.257 ************
2025-05-05 00:56:33.046795 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046802 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046809 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046816 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046823 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046830 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046838 | orchestrator |
2025-05-05 00:56:33.046845 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-05 00:56:33.046852 | orchestrator | Monday 05 May 2025 00:45:16 +0000 (0:00:00.532) 0:01:26.790 ************
2025-05-05 00:56:33.046859 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.046866 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.046877 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.046885 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046892 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046899 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046906 | orchestrator |
2025-05-05 00:56:33.046913 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-05 00:56:33.046920 | orchestrator | Monday 05 May 2025 00:45:17 +0000 (0:00:00.684) 0:01:27.475 ************
2025-05-05 00:56:33.046927 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.046934 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.046941 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.046948 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.046955 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.046962 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.046969 | orchestrator |
2025-05-05 00:56:33.046976 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-05 00:56:33.046984 | orchestrator | Monday 05 May 2025 00:45:17 +0000 (0:00:00.526) 0:01:28.002 ************
2025-05-05 00:56:33.046991 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.046998 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.047005 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.047012 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.047019 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.047027 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.047034 | orchestrator |
2025-05-05 00:56:33.047041 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-05 00:56:33.047054 | orchestrator | Monday 05 May 2025 00:45:18 +0000 (0:00:00.728) 0:01:28.731 ************
2025-05-05 00:56:33.047061 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047068 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047075 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047082 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047089 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047096 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047104 | orchestrator |
2025-05-05 00:56:33.047111 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-05 00:56:33.047129 | orchestrator | Monday 05 May 2025 00:45:19 +0000 (0:00:00.642) 0:01:29.373 ************
2025-05-05 00:56:33.047137 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047143 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047150 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047157 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047164 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047175 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047182 | orchestrator |
2025-05-05 00:56:33.047189 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-05 00:56:33.047196 | orchestrator | Monday 05 May 2025 00:45:20 +0000 (0:00:00.911) 0:01:30.285 ************
2025-05-05 00:56:33.047203 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047210 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047217 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047224 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047231 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047238 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047245 | orchestrator |
2025-05-05 00:56:33.047252 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-05 00:56:33.047259 | orchestrator | Monday 05 May 2025 00:45:21 +0000 (0:00:00.947) 0:01:31.232 ************
2025-05-05 00:56:33.047267 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047274 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047281 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047288 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047295 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047302 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047325 | orchestrator |
2025-05-05 00:56:33.047333 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-05 00:56:33.047340 | orchestrator | Monday 05 May 2025 00:45:21 +0000 (0:00:00.775) 0:01:32.007 ************
2025-05-05 00:56:33.047347 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047354 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047361 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047368 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047375 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047382 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047388 | orchestrator |
2025-05-05 00:56:33.047395 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-05 00:56:33.047402 | orchestrator | Monday 05 May 2025 00:45:22 +0000 (0:00:00.606) 0:01:32.614 ************
2025-05-05 00:56:33.047410 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047416 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047423 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047431 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047438 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047445 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047452 | orchestrator |
2025-05-05 00:56:33.047500 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-05 00:56:33.047510 | orchestrator | Monday 05 May 2025 00:45:23 +0000 (0:00:00.801) 0:01:33.416 ************
2025-05-05 00:56:33.047517 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047524 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047531 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047538 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047545 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047552 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047563 | orchestrator |
2025-05-05 00:56:33.047574 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-05 00:56:33.047586 | orchestrator | Monday 05 May 2025 00:45:23 +0000 (0:00:00.620) 0:01:34.036 ************
2025-05-05 00:56:33.047597 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047608 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047618 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047629 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047640 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047649 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047659 | orchestrator |
2025-05-05 00:56:33.047669 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-05 00:56:33.047679 | orchestrator | Monday 05 May 2025 00:45:24 +0000 (0:00:00.793) 0:01:34.829 ************
2025-05-05 00:56:33.047690 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047701 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047712 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047724 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047736 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047749 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047759 | orchestrator |
2025-05-05 00:56:33.047766 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-05 00:56:33.047774 | orchestrator | Monday 05 May 2025 00:45:25 +0000 (0:00:00.640) 0:01:35.470 ************
2025-05-05 00:56:33.047781 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047789 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047796 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047803 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047810 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047817 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047824 | orchestrator |
2025-05-05 00:56:33.047831 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-05 00:56:33.047844 | orchestrator | Monday 05 May 2025 00:45:26 +0000 (0:00:00.857) 0:01:36.328 ************
2025-05-05 00:56:33.047851 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047858 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047870 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047877 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047884 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047891 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047898 | orchestrator |
2025-05-05 00:56:33.047905 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-05 00:56:33.047912 | orchestrator | Monday 05 May 2025 00:45:26 +0000 (0:00:00.683) 0:01:37.012 ************
2025-05-05 00:56:33.047919 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.047926 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.047933 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.047940 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.047947 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.047954 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.047961 | orchestrator |
2025-05-05 00:56:33.047968 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-05 00:56:33.047975 | orchestrator | Monday 05 May 2025 00:45:27 +0000 (0:00:00.972) 0:01:37.985 ************
2025-05-05 00:56:33.047982 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-05 00:56:33.047989 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-05 00:56:33.047996 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048003 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-05 00:56:33.048010 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-05 00:56:33.048017 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048024 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-05 00:56:33.048031 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-05 00:56:33.048038 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048045 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.048052 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.048059 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048066 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-05 00:56:33.048073 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-05 00:56:33.048080 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048086 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-05 00:56:33.048097 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-05 00:56:33.048104 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048111 | orchestrator |
2025-05-05 00:56:33.048118 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-05 00:56:33.048125 | orchestrator | Monday 05 May 2025 00:45:28 +0000 (0:00:01.023) 0:01:39.009 ************
2025-05-05 00:56:33.048132 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-05 00:56:33.048141 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-05 00:56:33.048149 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048157 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-05 00:56:33.048166 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-05 00:56:33.048174 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048182 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-05 00:56:33.048190 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-05 00:56:33.048254 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048265 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-05 00:56:33.048273 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-05 00:56:33.048282 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048294 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-05 00:56:33.048303 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-05 00:56:33.048355 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048364 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-05 00:56:33.048372 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-05 00:56:33.048379 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048388 | orchestrator |
2025-05-05 00:56:33.048396 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-05 00:56:33.048404 | orchestrator | Monday 05 May 2025 00:45:29 +0000 (0:00:01.138) 0:01:40.147 ************
2025-05-05 00:56:33.048412 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048425 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048436 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048448 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048459 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048472 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048484 | orchestrator |
2025-05-05 00:56:33.048496 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-05 00:56:33.048508 | orchestrator | Monday 05 May 2025 00:45:30 +0000 (0:00:01.007) 0:01:41.154 ************
2025-05-05 00:56:33.048517 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048524 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048531 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048537 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048543 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048549 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048556 | orchestrator |
2025-05-05 00:56:33.048562 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-05 00:56:33.048569 | orchestrator | Monday 05 May 2025 00:45:32 +0000 (0:00:01.333) 0:01:42.488 ************
2025-05-05 00:56:33.048575 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048582 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048588 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048594 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048601 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048607 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048613 | orchestrator |
2025-05-05 00:56:33.048636 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-05 00:56:33.048643 | orchestrator | Monday 05 May 2025 00:45:32 +0000 (0:00:00.636) 0:01:43.125 ************
2025-05-05 00:56:33.048649 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048656 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048662 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048668 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048674 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048680 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048687 | orchestrator |
2025-05-05 00:56:33.048693 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-05 00:56:33.048699 | orchestrator | Monday 05 May 2025 00:45:34 +0000 (0:00:01.326) 0:01:44.451 ************
2025-05-05 00:56:33.048706 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048712 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048718 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048724 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048731 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048741 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048748 | orchestrator |
2025-05-05 00:56:33.048757 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-05 00:56:33.048763 | orchestrator | Monday 05 May 2025 00:45:35 +0000 (0:00:00.838) 0:01:45.290 ************
2025-05-05 00:56:33.048775 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048781 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.048788 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.048794 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.048800 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.048806 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.048813 | orchestrator |
2025-05-05 00:56:33.048819 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-05 00:56:33.048825 | orchestrator | Monday 05 May 2025 00:45:35 +0000 (0:00:00.844) 0:01:46.134 ************
2025-05-05 00:56:33.048831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.048838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.048844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.048851 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048857 | orchestrator |
2025-05-05 00:56:33.048863 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-05 00:56:33.048870 | orchestrator | Monday 05 May 2025 00:45:36 +0000 (0:00:00.432) 0:01:46.567 ************
2025-05-05 00:56:33.048876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.048882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.048888 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.048895 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048901 | orchestrator |
2025-05-05 00:56:33.048907 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-05 00:56:33.048914 | orchestrator | Monday 05 May 2025 00:45:36 +0000 (0:00:00.402) 0:01:46.970 ************
2025-05-05 00:56:33.048920 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.048926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.048933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.048987 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.048996 | orchestrator |
2025-05-05 00:56:33.049003 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.049009 | orchestrator | Monday 05 May 2025 00:45:37 +0000 (0:00:00.404) 0:01:47.374 ************
2025-05-05 00:56:33.049015 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.049021 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.049028 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.049034 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.049040 |
orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049046 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049053 | orchestrator | 2025-05-05 00:56:33.049059 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-05 00:56:33.049065 | orchestrator | Monday 05 May 2025 00:45:37 +0000 (0:00:00.637) 0:01:48.011 ************ 2025-05-05 00:56:33.049072 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-05 00:56:33.049078 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049084 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-05 00:56:33.049091 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049097 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-05 00:56:33.049103 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.049109 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049115 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049122 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.049128 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049134 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.049140 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049147 | orchestrator | 2025-05-05 00:56:33.049153 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-05 00:56:33.049163 | orchestrator | Monday 05 May 2025 00:45:38 +0000 (0:00:01.081) 0:01:49.093 ************ 2025-05-05 00:56:33.049170 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049176 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049182 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049188 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049194 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049200 | 
orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049207 | orchestrator | 2025-05-05 00:56:33.049213 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-05 00:56:33.049219 | orchestrator | Monday 05 May 2025 00:45:39 +0000 (0:00:00.608) 0:01:49.701 ************ 2025-05-05 00:56:33.049225 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049232 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049238 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049244 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049250 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049257 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049263 | orchestrator | 2025-05-05 00:56:33.049269 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-05 00:56:33.049275 | orchestrator | Monday 05 May 2025 00:45:40 +0000 (0:00:00.853) 0:01:50.555 ************ 2025-05-05 00:56:33.049282 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-05 00:56:33.049288 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049294 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-05 00:56:33.049300 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049319 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-05 00:56:33.049326 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049332 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.049338 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049344 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.049351 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049357 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.049363 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049369 | 
orchestrator | 2025-05-05 00:56:33.049375 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-05 00:56:33.049382 | orchestrator | Monday 05 May 2025 00:45:41 +0000 (0:00:00.739) 0:01:51.294 ************ 2025-05-05 00:56:33.049388 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049394 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049400 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049407 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.049413 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049423 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.049430 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049436 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.049442 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049449 | orchestrator | 2025-05-05 00:56:33.049455 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-05 00:56:33.049461 | orchestrator | Monday 05 May 2025 00:45:41 +0000 (0:00:00.841) 0:01:52.135 ************ 2025-05-05 00:56:33.049468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.049474 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.049481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.049487 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049497 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-05 00:56:33.049503 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-05 00:56:33.049510 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-05 00:56:33.049516 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049525 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-05 00:56:33.049566 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-05 00:56:33.049575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-05 00:56:33.049582 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.049595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.049601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.049607 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049614 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-05 00:56:33.049620 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-05 00:56:33.049626 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-05 00:56:33.049632 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049639 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-05 00:56:33.049645 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-05 00:56:33.049651 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-05 00:56:33.049658 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049668 | orchestrator | 2025-05-05 00:56:33.049675 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-05 00:56:33.049681 | orchestrator | Monday 05 May 2025 00:45:43 +0000 (0:00:01.561) 0:01:53.697 ************ 2025-05-05 00:56:33.049688 | orchestrator | 
skipping: [testbed-node-0] 2025-05-05 00:56:33.049694 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049701 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049707 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049713 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049720 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049726 | orchestrator | 2025-05-05 00:56:33.049732 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-05 00:56:33.049738 | orchestrator | Monday 05 May 2025 00:45:44 +0000 (0:00:01.223) 0:01:54.921 ************ 2025-05-05 00:56:33.049744 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049751 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049757 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049763 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-05 00:56:33.049770 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049776 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-05 00:56:33.049782 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.049788 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-05 00:56:33.049795 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049801 | orchestrator | 2025-05-05 00:56:33.049807 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-05 00:56:33.049814 | orchestrator | Monday 05 May 2025 00:45:46 +0000 (0:00:01.282) 0:01:56.204 ************ 2025-05-05 00:56:33.049820 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049826 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049833 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049839 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049845 | orchestrator | skipping: [testbed-node-4] 
2025-05-05 00:56:33.049851 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.049858 | orchestrator | 2025-05-05 00:56:33.049864 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-05 00:56:33.049877 | orchestrator | Monday 05 May 2025 00:45:47 +0000 (0:00:01.334) 0:01:57.539 ************ 2025-05-05 00:56:33.049896 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.049978 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.049985 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.049991 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.049998 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.050004 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.050010 | orchestrator | 2025-05-05 00:56:33.050038 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-05 00:56:33.050045 | orchestrator | Monday 05 May 2025 00:45:48 +0000 (0:00:01.405) 0:01:58.944 ************ 2025-05-05 00:56:33.050051 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.050057 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.050063 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.050070 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.050076 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.050082 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.050089 | orchestrator | 2025-05-05 00:56:33.050098 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-05 00:56:33.050104 | orchestrator | Monday 05 May 2025 00:45:50 +0000 (0:00:01.807) 0:02:00.751 ************ 2025-05-05 00:56:33.050111 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.050117 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.050123 | orchestrator | changed: [testbed-node-2] 2025-05-05 
00:56:33.050129 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.050136 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.050142 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.050148 | orchestrator | 2025-05-05 00:56:33.050154 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-05-05 00:56:33.050161 | orchestrator | Monday 05 May 2025 00:45:52 +0000 (0:00:01.987) 0:02:02.739 ************ 2025-05-05 00:56:33.050167 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.050175 | orchestrator | 2025-05-05 00:56:33.050181 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-05 00:56:33.050187 | orchestrator | Monday 05 May 2025 00:45:53 +0000 (0:00:01.250) 0:02:03.990 ************ 2025-05-05 00:56:33.050193 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.050199 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.050206 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.050213 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.050221 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.050228 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.050235 | orchestrator | 2025-05-05 00:56:33.050285 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-05 00:56:33.050295 | orchestrator | Monday 05 May 2025 00:45:54 +0000 (0:00:01.005) 0:02:04.995 ************ 2025-05-05 00:56:33.050302 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.050324 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.050331 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.050338 | orchestrator | skipping: [testbed-node-3] 2025-05-05 
00:56:33.050346 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.050353 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.050360 | orchestrator | 2025-05-05 00:56:33.050367 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-05 00:56:33.050374 | orchestrator | Monday 05 May 2025 00:45:55 +0000 (0:00:00.758) 0:02:05.754 ************ 2025-05-05 00:56:33.050381 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-05 00:56:33.050388 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-05 00:56:33.050396 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-05 00:56:33.050408 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-05 00:56:33.050415 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-05 00:56:33.050422 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-05 00:56:33.050429 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-05 00:56:33.050436 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-05 00:56:33.050444 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-05 00:56:33.050451 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-05 00:56:33.050458 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-05 00:56:33.050465 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-05 00:56:33.050472 | orchestrator | 2025-05-05 00:56:33.050479 | orchestrator | TASK [ceph-container-common : ensure 
tmpfiles.d is present] ******************** 2025-05-05 00:56:33.050486 | orchestrator | Monday 05 May 2025 00:45:57 +0000 (0:00:01.965) 0:02:07.719 ************ 2025-05-05 00:56:33.050493 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.050500 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.050511 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.050519 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.050526 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.050533 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.050540 | orchestrator | 2025-05-05 00:56:33.050548 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-05-05 00:56:33.050555 | orchestrator | Monday 05 May 2025 00:45:58 +0000 (0:00:00.967) 0:02:08.687 ************ 2025-05-05 00:56:33.050562 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.050569 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.050576 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.050582 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.050588 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.050594 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.050601 | orchestrator | 2025-05-05 00:56:33.050607 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-05 00:56:33.050613 | orchestrator | Monday 05 May 2025 00:45:59 +0000 (0:00:00.902) 0:02:09.589 ************ 2025-05-05 00:56:33.050620 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.050626 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.050632 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.050639 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.050645 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.050651 | orchestrator | skipping: 
[testbed-node-5] 2025-05-05 00:56:33.050657 | orchestrator | 2025-05-05 00:56:33.050666 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-05 00:56:33.050673 | orchestrator | Monday 05 May 2025 00:46:00 +0000 (0:00:00.668) 0:02:10.258 ************ 2025-05-05 00:56:33.050680 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.050686 | orchestrator | 2025-05-05 00:56:33.050692 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-05 00:56:33.050699 | orchestrator | Monday 05 May 2025 00:46:01 +0000 (0:00:01.397) 0:02:11.655 ************ 2025-05-05 00:56:33.050705 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.050711 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.050718 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.050724 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.050730 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.050740 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.050747 | orchestrator | 2025-05-05 00:56:33.050756 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-05 00:56:33.050762 | orchestrator | Monday 05 May 2025 00:46:47 +0000 (0:00:46.341) 0:02:57.997 ************ 2025-05-05 00:56:33.050768 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-05 00:56:33.050775 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-05 00:56:33.050781 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-05 00:56:33.050787 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.050793 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2025-05-05 00:56:33.050800 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-05 00:56:33.050843 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-05 00:56:33.050851 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.050858 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-05 00:56:33.050864 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-05 00:56:33.050871 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-05 00:56:33.050877 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.050883 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-05 00:56:33.050890 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-05 00:56:33.050896 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-05 00:56:33.050903 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.050909 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-05 00:56:33.050915 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-05 00:56:33.050921 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-05 00:56:33.050928 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.050934 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-05 00:56:33.050940 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-05 00:56:33.050946 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-05 00:56:33.050953 | orchestrator | skipping: 
[testbed-node-5] 2025-05-05 00:56:33.050959 | orchestrator | 2025-05-05 00:56:33.050965 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-05 00:56:33.050971 | orchestrator | Monday 05 May 2025 00:46:48 +0000 (0:00:00.779) 0:02:58.776 ************ 2025-05-05 00:56:33.050978 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.050984 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.050990 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.050997 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.051003 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.051010 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.051016 | orchestrator | 2025-05-05 00:56:33.051022 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-05 00:56:33.051028 | orchestrator | Monday 05 May 2025 00:46:49 +0000 (0:00:00.606) 0:02:59.383 ************ 2025-05-05 00:56:33.051035 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.051041 | orchestrator | 2025-05-05 00:56:33.051047 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-05 00:56:33.051053 | orchestrator | Monday 05 May 2025 00:46:49 +0000 (0:00:00.275) 0:02:59.659 ************ 2025-05-05 00:56:33.051060 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.051066 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.051079 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.051085 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.051091 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.051098 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.051104 | orchestrator | 2025-05-05 00:56:33.051111 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-05 00:56:33.051117 | 
orchestrator | Monday 05 May 2025 00:46:50 +0000 (0:00:00.599) 0:03:00.258 ************ 2025-05-05 00:56:33.051124 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.051130 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.051136 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.051142 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.051148 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.051165 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.051172 | orchestrator | 2025-05-05 00:56:33.051178 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-05 00:56:33.051185 | orchestrator | Monday 05 May 2025 00:46:50 +0000 (0:00:00.703) 0:03:00.962 ************ 2025-05-05 00:56:33.051191 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.051197 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.051203 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.051209 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.051219 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.051225 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.051232 | orchestrator | 2025-05-05 00:56:33.051238 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-05 00:56:33.051247 | orchestrator | Monday 05 May 2025 00:46:51 +0000 (0:00:00.665) 0:03:01.627 ************ 2025-05-05 00:56:33.051254 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.051261 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.051267 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.051273 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.051279 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.051286 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.051292 | orchestrator | 2025-05-05 00:56:33.051298 | orchestrator | TASK 
[ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
2025-05-05 00:56:33.051315 | orchestrator | Monday 05 May 2025 00:46:53 +0000 (0:00:01.680) 0:03:03.308 ************
2025-05-05 00:56:33.051322 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.051328 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.051334 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.051341 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.051347 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.051353 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.051359 | orchestrator |
2025-05-05 00:56:33.051366 | orchestrator | TASK [ceph-container-common : include release.yml] *****************************
2025-05-05 00:56:33.051372 | orchestrator | Monday 05 May 2025 00:46:53 +0000 (0:00:00.645) 0:03:03.953 ************
2025-05-05 00:56:33.051379 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.051386 | orchestrator |
2025-05-05 00:56:33.051428 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] *********************
2025-05-05 00:56:33.051437 | orchestrator | Monday 05 May 2025 00:46:55 +0000 (0:00:01.381) 0:03:05.334 ************
2025-05-05 00:56:33.051444 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051450 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051456 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051463 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051469 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051475 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051481 | orchestrator |
2025-05-05 00:56:33.051488 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ********************
2025-05-05 00:56:33.051494 | orchestrator | Monday 05 May 2025 00:46:56 +0000 (0:00:00.950) 0:03:06.285 ************
2025-05-05 00:56:33.051505 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051512 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051518 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051524 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051531 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051537 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051543 | orchestrator |
2025-05-05 00:56:33.051549 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ******************
2025-05-05 00:56:33.051556 | orchestrator | Monday 05 May 2025 00:46:56 +0000 (0:00:00.734) 0:03:07.019 ************
2025-05-05 00:56:33.051562 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051568 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051574 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051581 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051587 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051593 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051599 | orchestrator |
2025-05-05 00:56:33.051605 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] *********************
2025-05-05 00:56:33.051612 | orchestrator | Monday 05 May 2025 00:46:57 +0000 (0:00:01.033) 0:03:08.053 ************
2025-05-05 00:56:33.051618 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051624 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051630 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051636 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051642 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051649 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051655 | orchestrator |
2025-05-05 00:56:33.051661 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ******************
2025-05-05 00:56:33.051667 | orchestrator | Monday 05 May 2025 00:46:58 +0000 (0:00:00.654) 0:03:08.708 ************
2025-05-05 00:56:33.051673 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051680 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051686 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051692 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051698 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051704 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051710 | orchestrator |
2025-05-05 00:56:33.051716 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] *******************
2025-05-05 00:56:33.051723 | orchestrator | Monday 05 May 2025 00:46:59 +0000 (0:00:00.953) 0:03:09.661 ************
2025-05-05 00:56:33.051729 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051735 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051741 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051748 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051754 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051760 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051766 | orchestrator |
2025-05-05 00:56:33.051772 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] *******************
2025-05-05 00:56:33.051778 | orchestrator | Monday 05 May 2025 00:47:00 +0000 (0:00:00.679) 0:03:10.341 ************
2025-05-05 00:56:33.051785 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.051794 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.051800 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.051807 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.051813 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.051819 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.051825 | orchestrator |
2025-05-05 00:56:33.051831 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ********************
2025-05-05 00:56:33.051838 | orchestrator | Monday 05 May 2025 00:47:01 +0000 (0:00:00.913) 0:03:11.254 ************
2025-05-05 00:56:33.051844 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.051850 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.051865 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.051871 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.051878 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.051884 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.051890 | orchestrator |
2025-05-05 00:56:33.051896 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-05 00:56:33.051902 | orchestrator | Monday 05 May 2025 00:47:02 +0000 (0:00:01.537) 0:03:12.792 ************
2025-05-05 00:56:33.051909 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.051915 | orchestrator |
2025-05-05 00:56:33.051921 | orchestrator | TASK [ceph-config : create ceph initial directories] ***************************
2025-05-05 00:56:33.051928 | orchestrator | Monday 05 May 2025 00:47:03 +0000 (0:00:01.391) 0:03:14.183 ************
2025-05-05 00:56:33.051934 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-05-05 00:56:33.051940 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-05-05 00:56:33.051946 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-05-05 00:56:33.051952 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-05-05 00:56:33.051958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-05-05 00:56:33.051965 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-05-05 00:56:33.051971 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-05-05 00:56:33.051977 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-05-05 00:56:33.052015 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-05-05 00:56:33.052024 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-05-05 00:56:33.052031 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-05-05 00:56:33.052037 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-05-05 00:56:33.052043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-05-05 00:56:33.052050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-05-05 00:56:33.052056 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-05-05 00:56:33.052062 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-05-05 00:56:33.052069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-05-05 00:56:33.052075 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-05-05 00:56:33.052081 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-05-05 00:56:33.052087 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-05-05 00:56:33.052094 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-05-05 00:56:33.052100 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-05-05 00:56:33.052106 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-05-05 00:56:33.052112 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-05-05 00:56:33.052118 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-05-05 00:56:33.052124 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-05-05 00:56:33.052130 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-05-05 00:56:33.052137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-05-05 00:56:33.052143 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-05-05 00:56:33.052149 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-05-05 00:56:33.052155 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-05-05 00:56:33.052161 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-05-05 00:56:33.052167 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-05-05 00:56:33.052174 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-05-05 00:56:33.052184 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-05-05 00:56:33.052190 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-05-05 00:56:33.052200 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-05-05 00:56:33.052206 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-05-05 00:56:33.052212 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-05-05 00:56:33.052219 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-05-05 00:56:33.052225 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-05 00:56:33.052231 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-05 00:56:33.052237 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-05-05 00:56:33.052243 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-05 00:56:33.052249 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-05-05 00:56:33.052255 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-05 00:56:33.052262 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-05 00:56:33.052268 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-05 00:56:33.052274 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-05 00:56:33.052280 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-05 00:56:33.052286 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-05 00:56:33.052292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-05 00:56:33.052298 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-05 00:56:33.052350 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-05 00:56:33.052358 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-05 00:56:33.052364 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-05 00:56:33.052370 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-05 00:56:33.052377 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-05 00:56:33.052383 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-05 00:56:33.052389 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-05 00:56:33.052395 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-05 00:56:33.052402 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-05 00:56:33.052408 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-05 00:56:33.052414 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-05 00:56:33.052421 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-05 00:56:33.052427 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-05 00:56:33.052433 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-05 00:56:33.052479 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-05 00:56:33.052489 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-05 00:56:33.052495 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-05 00:56:33.052502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-05 00:56:33.052508 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-05 00:56:33.052514 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-05 00:56:33.052521 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-05 00:56:33.052532 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-05 00:56:33.052538 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-05-05 00:56:33.052545 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-05 00:56:33.052551 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-05 00:56:33.052557 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-05-05 00:56:33.052564 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-05 00:56:33.052570 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-05-05 00:56:33.052576 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-05-05 00:56:33.052583 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-05-05 00:56:33.052589 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-05-05 00:56:33.052595 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-05-05 00:56:33.052601 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-05-05 00:56:33.052608 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-05-05 00:56:33.052614 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-05-05 00:56:33.052620 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-05-05 00:56:33.052626 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-05-05 00:56:33.052632 | orchestrator |
2025-05-05 00:56:33.052638 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-05 00:56:33.052648 | orchestrator | Monday 05 May 2025 00:47:09 +0000 (0:00:05.813) 0:03:19.997 ************
2025-05-05 00:56:33.052654 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.052661 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.052667 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.052673 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.052680 | orchestrator |
2025-05-05 00:56:33.052686 | orchestrator | TASK [ceph-config : create rados gateway instance directories] *****************
2025-05-05 00:56:33.052692 | orchestrator | Monday 05 May 2025 00:47:11 +0000 (0:00:01.368) 0:03:21.365 ************
2025-05-05 00:56:33.052699 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.052705 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.052711 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.052718 | orchestrator |
2025-05-05 00:56:33.052724 | orchestrator | TASK [ceph-config : generate environment file] *********************************
2025-05-05 00:56:33.052730 | orchestrator | Monday 05 May 2025 00:47:12 +0000 (0:00:00.921) 0:03:22.287 ************
2025-05-05 00:56:33.052736 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.052743 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.052749 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.052755 | orchestrator |
2025-05-05 00:56:33.052762 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-05 00:56:33.052768 | orchestrator | Monday 05 May 2025 00:47:13 +0000 (0:00:01.117) 0:03:23.404 ************
2025-05-05 00:56:33.052774 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.052780 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.052787 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.052797 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.052803 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.052810 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.052816 | orchestrator |
2025-05-05 00:56:33.052822 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-05 00:56:33.052828 | orchestrator | Monday 05 May 2025 00:47:13 +0000 (0:00:00.768) 0:03:24.173 ************
2025-05-05 00:56:33.052835 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.052841 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.052847 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.052853 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.052859 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.052866 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.052872 | orchestrator |
2025-05-05 00:56:33.052879 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-05 00:56:33.052885 | orchestrator | Monday 05 May 2025 00:47:14 +0000 (0:00:00.636) 0:03:24.809 ************
2025-05-05 00:56:33.052890 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.052928 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.052936 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.052942 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.052948 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.052955 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.052961 | orchestrator |
2025-05-05 00:56:33.052967 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-05 00:56:33.052973 | orchestrator | Monday 05 May 2025 00:47:15 +0000 (0:00:00.724) 0:03:25.534 ************
2025-05-05 00:56:33.052979 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.052985 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.052991 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.052997 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053003 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053009 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053015 | orchestrator |
2025-05-05 00:56:33.053021 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-05 00:56:33.053027 | orchestrator | Monday 05 May 2025 00:47:15 +0000 (0:00:00.488) 0:03:26.022 ************
2025-05-05 00:56:33.053033 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053039 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053045 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053051 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053057 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053063 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053069 | orchestrator |
2025-05-05 00:56:33.053075 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-05 00:56:33.053081 | orchestrator | Monday 05 May 2025 00:47:16 +0000 (0:00:00.621) 0:03:26.643 ************
2025-05-05 00:56:33.053087 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053093 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053099 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053105 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053111 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053117 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053123 | orchestrator |
2025-05-05 00:56:33.053129 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-05 00:56:33.053135 | orchestrator | Monday 05 May 2025 00:47:17 +0000 (0:00:00.572) 0:03:27.216 ************
2025-05-05 00:56:33.053141 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053148 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053157 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053164 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053170 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053176 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053185 | orchestrator |
2025-05-05 00:56:33.053191 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-05 00:56:33.053197 | orchestrator | Monday 05 May 2025 00:47:17 +0000 (0:00:00.765) 0:03:27.981 ************
2025-05-05 00:56:33.053205 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053212 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053218 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053223 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053229 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053235 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053243 | orchestrator |
2025-05-05 00:56:33.053262 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-05 00:56:33.053272 | orchestrator | Monday 05 May 2025 00:47:18 +0000 (0:00:00.583) 0:03:28.565 ************
2025-05-05 00:56:33.053280 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053289 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053298 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053320 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.053329 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.053341 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.053350 | orchestrator |
2025-05-05 00:56:33.053359 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-05 00:56:33.053368 | orchestrator | Monday 05 May 2025 00:47:20 +0000 (0:00:02.064) 0:03:30.630 ************
2025-05-05 00:56:33.053377 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053387 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053396 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053406 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.053416 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.053422 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.053428 | orchestrator |
2025-05-05 00:56:33.053435 | orchestrator | TASK [ceph-facts : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-05 00:56:33.053441 | orchestrator | Monday 05 May 2025 00:47:21 +0000 (0:00:00.585) 0:03:31.215 ************
2025-05-05 00:56:33.053447 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-05 00:56:33.053453 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-05 00:56:33.053459 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053465 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-05 00:56:33.053475 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-05 00:56:33.053481 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053487 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-05 00:56:33.053493 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-05 00:56:33.053500 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053511 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.053520 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.053529 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053540 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-05 00:56:33.053550 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-05 00:56:33.053559 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053568 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-05 00:56:33.053577 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-05 00:56:33.053588 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053598 | orchestrator |
2025-05-05 00:56:33.053609 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-05 00:56:33.053683 | orchestrator | Monday 05 May 2025 00:47:21 +0000 (0:00:00.761) 0:03:31.976 ************
2025-05-05 00:56:33.053694 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-05 00:56:33.053705 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-05 00:56:33.053712 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053726 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-05 00:56:33.053733 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-05 00:56:33.053740 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053747 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-05 00:56:33.053754 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-05 00:56:33.053761 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053768 | orchestrator | ok: [testbed-node-3] => (item=osd memory target)
2025-05-05 00:56:33.053775 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target)
2025-05-05 00:56:33.053782 | orchestrator | ok: [testbed-node-4] => (item=osd memory target)
2025-05-05 00:56:33.053789 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target)
2025-05-05 00:56:33.053796 | orchestrator | ok: [testbed-node-5] => (item=osd memory target)
2025-05-05 00:56:33.053802 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target)
2025-05-05 00:56:33.053809 | orchestrator |
2025-05-05 00:56:33.053816 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-05 00:56:33.053822 | orchestrator | Monday 05 May 2025 00:47:22 +0000 (0:00:00.574) 0:03:32.550 ************
2025-05-05 00:56:33.053829 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053836 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053843 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053849 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.053856 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.053863 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.053869 | orchestrator |
2025-05-05 00:56:33.053876 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-05 00:56:33.053883 | orchestrator | Monday 05 May 2025 00:47:23 +0000 (0:00:00.777) 0:03:33.328 ************
2025-05-05 00:56:33.053890 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053897 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053903 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053910 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053917 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053924 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053931 | orchestrator |
2025-05-05 00:56:33.053938 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-05 00:56:33.053944 | orchestrator | Monday 05 May 2025 00:47:23 +0000 (0:00:00.551) 0:03:33.879 ************
2025-05-05 00:56:33.053950 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.053956 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.053961 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.053967 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.053973 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.053979 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.053985 | orchestrator |
2025-05-05 00:56:33.053991 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-05 00:56:33.053997 | orchestrator | Monday 05 May 2025 00:47:24 +0000 (0:00:00.735) 0:03:34.614 ************
2025-05-05 00:56:33.054003 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054011 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054035 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054041 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.054047 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.054053 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.054059 | orchestrator |
2025-05-05 00:56:33.054067 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-05 00:56:33.054073 | orchestrator | Monday 05 May 2025 00:47:25 +0000 (0:00:00.586) 0:03:35.201 ************
2025-05-05 00:56:33.054079 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054085 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054095 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054101 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.054107 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.054112 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.054118 | orchestrator |
2025-05-05 00:56:33.054124 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-05 00:56:33.054130 | orchestrator | Monday 05 May 2025 00:47:25 +0000 (0:00:00.698) 0:03:35.900 ************
2025-05-05 00:56:33.054136 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054142 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054148 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054154 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.054159 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.054165 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.054171 | orchestrator |
2025-05-05 00:56:33.054177 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-05 00:56:33.054183 | orchestrator | Monday 05 May 2025 00:47:26 +0000 (0:00:00.591) 0:03:36.491 ************
2025-05-05 00:56:33.054189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.054195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.054201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.054207 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054213 | orchestrator |
2025-05-05 00:56:33.054218 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-05 00:56:33.054224 | orchestrator | Monday 05 May 2025 00:47:26 +0000 (0:00:00.647) 0:03:37.138 ************
2025-05-05 00:56:33.054230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.054236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.054242 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.054248 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054254 | orchestrator |
2025-05-05 00:56:33.054299 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-05 00:56:33.054341 | orchestrator | Monday 05 May 2025 00:47:27 +0000 (0:00:00.364) 0:03:37.502 ************
2025-05-05 00:56:33.054348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.054354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.054360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.054366 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054372 | orchestrator |
2025-05-05 00:56:33.054378 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.054384 | orchestrator | Monday 05 May 2025 00:47:27 +0000 (0:00:00.416) 0:03:37.919 ************
2025-05-05 00:56:33.054390 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054396 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054402 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054408 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.054414 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.054420 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.054426 | orchestrator |
2025-05-05 00:56:33.054432 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-05 00:56:33.054437 | orchestrator | Monday 05 May 2025 00:47:28 +0000 (0:00:00.563) 0:03:38.483 ************
2025-05-05 00:56:33.054443 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-05 00:56:33.054449 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054455 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-05 00:56:33.054462 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-05 00:56:33.054467 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054473 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054479 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.054496 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-05 00:56:33.054502 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-05 00:56:33.054508 | orchestrator |
2025-05-05 00:56:33.054514 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-05 00:56:33.054520 | orchestrator | Monday 05 May 2025 00:47:29 +0000 (0:00:01.114) 0:03:39.598 ************
2025-05-05 00:56:33.054526 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054532 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054538 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054544 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.054550 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.054556 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.054562 | orchestrator |
2025-05-05 00:56:33.054568 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.054574 | orchestrator | Monday 05 May 2025 00:47:29 +0000 (0:00:00.757) 0:03:40.148 ************
2025-05-05 00:56:33.054579 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054586 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054592 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054598 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.054606 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.054616 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.054628 | orchestrator |
2025-05-05 00:56:33.054644 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-05 00:56:33.054655 | orchestrator | Monday 05 May 2025 00:47:30 +0000 (0:00:00.757) 0:03:40.906 ************
2025-05-05 00:56:33.054666 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-05 00:56:33.054676 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.054687 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-05 00:56:33.054716 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.054725 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-05 00:56:33.054731 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.054737 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.054745 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.054754 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-05 00:56:33.054764 | orchestrator
| skipping: [testbed-node-4] 2025-05-05 00:56:33.054772 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.054781 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.054790 | orchestrator | 2025-05-05 00:56:33.054800 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-05 00:56:33.054810 | orchestrator | Monday 05 May 2025 00:47:31 +0000 (0:00:00.922) 0:03:41.829 ************ 2025-05-05 00:56:33.054820 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.054830 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.054835 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.054841 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.054846 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.054852 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.054857 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.054863 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.054868 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.054873 | orchestrator | 2025-05-05 00:56:33.054879 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-05 00:56:33.054884 | orchestrator | Monday 05 May 2025 00:47:32 +0000 (0:00:01.092) 0:03:42.921 ************ 2025-05-05 00:56:33.054889 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.054900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.054906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  
2025-05-05 00:56:33.054911 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.054917 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-05 00:56:33.054986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-05 00:56:33.054997 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-05 00:56:33.055004 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.055010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-05 00:56:33.055016 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-05 00:56:33.055023 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-05 00:56:33.055029 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.055035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.055041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.055046 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-05 00:56:33.055052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.055058 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-05 00:56:33.055070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-05 00:56:33.055076 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-05 00:56:33.055082 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.055088 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-05 00:56:33.055094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-05 00:56:33.055100 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.055106 | orchestrator | 2025-05-05 00:56:33.055112 | orchestrator | TASK [ceph-config : generate 
ceph.conf configuration file] ********************* 2025-05-05 00:56:33.055119 | orchestrator | Monday 05 May 2025 00:47:34 +0000 (0:00:01.845) 0:03:44.767 ************ 2025-05-05 00:56:33.055125 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.055131 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.055137 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.055143 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.055149 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.055155 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.055161 | orchestrator | 2025-05-05 00:56:33.055167 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-05 00:56:33.055173 | orchestrator | Monday 05 May 2025 00:47:38 +0000 (0:00:03.963) 0:03:48.731 ************ 2025-05-05 00:56:33.055179 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.055185 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.055191 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.055197 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.055203 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.055209 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.055215 | orchestrator | 2025-05-05 00:56:33.055221 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-05 00:56:33.055227 | orchestrator | Monday 05 May 2025 00:47:39 +0000 (0:00:00.939) 0:03:49.670 ************ 2025-05-05 00:56:33.055233 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055239 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.055246 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.055252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:56:33.055259 | orchestrator | 
2025-05-05 00:56:33.055265 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-05 00:56:33.055271 | orchestrator | Monday 05 May 2025 00:47:40 +0000 (0:00:01.001) 0:03:50.672 ************ 2025-05-05 00:56:33.055282 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.055288 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.055294 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.055301 | orchestrator | 2025-05-05 00:56:33.055324 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-05 00:56:33.055330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.055336 | orchestrator | 2025-05-05 00:56:33.055341 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-05 00:56:33.055346 | orchestrator | Monday 05 May 2025 00:47:41 +0000 (0:00:01.050) 0:03:51.722 ************ 2025-05-05 00:56:33.055352 | orchestrator | 2025-05-05 00:56:33.055357 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-05 00:56:33.055363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.055368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.055374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.055379 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055385 | orchestrator | 2025-05-05 00:56:33.055390 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-05 00:56:33.055396 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.055401 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.055407 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.055412 | orchestrator 
| 2025-05-05 00:56:33.055417 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-05 00:56:33.055423 | orchestrator | Monday 05 May 2025 00:47:42 +0000 (0:00:01.286) 0:03:53.008 ************ 2025-05-05 00:56:33.055428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.055436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.055441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.055447 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.055452 | orchestrator | 2025-05-05 00:56:33.055457 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-05 00:56:33.055463 | orchestrator | Monday 05 May 2025 00:47:44 +0000 (0:00:01.286) 0:03:54.295 ************ 2025-05-05 00:56:33.055468 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.055474 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.055479 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.055484 | orchestrator | 2025-05-05 00:56:33.055490 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-05 00:56:33.055532 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055540 | orchestrator | 2025-05-05 00:56:33.055545 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-05 00:56:33.055551 | orchestrator | Monday 05 May 2025 00:47:44 +0000 (0:00:00.609) 0:03:54.904 ************ 2025-05-05 00:56:33.055556 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.055562 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.055567 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.055573 | orchestrator | 2025-05-05 00:56:33.055578 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 
2025-05-05 00:56:33.055584 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055589 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.055594 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.055600 | orchestrator | 2025-05-05 00:56:33.055607 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-05 00:56:33.055615 | orchestrator | Monday 05 May 2025 00:47:45 +0000 (0:00:00.687) 0:03:55.592 ************ 2025-05-05 00:56:33.055624 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.055632 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.055642 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.055651 | orchestrator | 2025-05-05 00:56:33.055662 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-05 00:56:33.055667 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055673 | orchestrator | 2025-05-05 00:56:33.055678 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-05 00:56:33.055683 | orchestrator | Monday 05 May 2025 00:47:46 +0000 (0:00:00.881) 0:03:56.473 ************ 2025-05-05 00:56:33.055689 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.055694 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.055699 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.055708 | orchestrator | 2025-05-05 00:56:33.055713 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-05 00:56:33.055719 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055724 | orchestrator | 2025-05-05 00:56:33.055729 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-05 00:56:33.055735 | orchestrator | Monday 05 May 2025 00:47:47 +0000 (0:00:00.836) 0:03:57.310 ************ 2025-05-05 
00:56:33.055740 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055745 | orchestrator | 2025-05-05 00:56:33.055751 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-05 00:56:33.055756 | orchestrator | Monday 05 May 2025 00:47:47 +0000 (0:00:00.145) 0:03:57.456 ************ 2025-05-05 00:56:33.055761 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.055767 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.055772 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.055777 | orchestrator | 2025-05-05 00:56:33.055783 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-05 00:56:33.055788 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055793 | orchestrator | 2025-05-05 00:56:33.055799 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-05 00:56:33.055804 | orchestrator | Monday 05 May 2025 00:47:48 +0000 (0:00:00.759) 0:03:58.215 ************ 2025-05-05 00:56:33.055810 | orchestrator | 2025-05-05 00:56:33.055815 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-05 00:56:33.055820 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:56:33.055831 | orchestrator | 2025-05-05 00:56:33.055836 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-05 00:56:33.055841 | orchestrator | Monday 05 May 2025 00:47:48 +0000 (0:00:00.783) 0:03:58.998 ************ 2025-05-05 00:56:33.055847 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.055852 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.055857 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.055863 | 
orchestrator | 2025-05-05 00:56:33.055868 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-05 00:56:33.055873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.055879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.055884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.055890 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055895 | orchestrator | 2025-05-05 00:56:33.055900 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-05 00:56:33.055908 | orchestrator | Monday 05 May 2025 00:47:49 +0000 (0:00:00.972) 0:03:59.970 ************ 2025-05-05 00:56:33.055914 | orchestrator | 2025-05-05 00:56:33.055919 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-05 00:56:33.055925 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.055930 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.055935 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.055953 | orchestrator | 2025-05-05 00:56:33.055959 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-05 00:56:33.055964 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.055973 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.055978 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.055984 | orchestrator | 2025-05-05 00:56:33.055989 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-05 00:56:33.055994 | orchestrator | Monday 05 May 2025 00:47:51 +0000 (0:00:01.242) 0:04:01.213 ************ 2025-05-05 00:56:33.056000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.056005 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.056010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.056016 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.056021 | orchestrator | 2025-05-05 00:56:33.056026 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-05 00:56:33.056032 | orchestrator | Monday 05 May 2025 00:47:51 +0000 (0:00:00.905) 0:04:02.118 ************ 2025-05-05 00:56:33.056037 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.056043 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.056048 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.056053 | orchestrator | 2025-05-05 00:56:33.056094 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-05 00:56:33.056102 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.056107 | orchestrator | 2025-05-05 00:56:33.056113 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-05 00:56:33.056118 | orchestrator | Monday 05 May 2025 00:47:52 +0000 (0:00:01.030) 0:04:03.149 ************ 2025-05-05 00:56:33.056124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.056129 | orchestrator | 2025-05-05 00:56:33.056134 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-05 00:56:33.056140 | orchestrator | Monday 05 May 2025 00:47:53 +0000 (0:00:00.556) 0:04:03.705 ************ 2025-05-05 00:56:33.056145 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.056151 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.056156 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.056161 | orchestrator | 2025-05-05 00:56:33.056167 | orchestrator | TASK [ceph-handler : rbd-target-api 
and rbd-target-gw handler] ***************** 2025-05-05 00:56:33.056172 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.056178 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.056183 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.056188 | orchestrator | 2025-05-05 00:56:33.056194 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-05 00:56:33.056199 | orchestrator | Monday 05 May 2025 00:47:54 +0000 (0:00:00.907) 0:04:04.613 ************ 2025-05-05 00:56:33.056205 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.056210 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.056215 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.056221 | orchestrator | 2025-05-05 00:56:33.056226 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-05 00:56:33.056232 | orchestrator | Monday 05 May 2025 00:47:55 +0000 (0:00:01.511) 0:04:06.124 ************ 2025-05-05 00:56:33.056237 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.056243 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.056248 | orchestrator | 2025-05-05 00:56:33.056253 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-05 00:56:33.056259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.056264 | orchestrator | 2025-05-05 00:56:33.056270 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-05 00:56:33.056275 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.056280 | orchestrator | 2025-05-05 00:56:33.056286 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-05 00:56:33.056291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.056300 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.056316 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.056322 | orchestrator | 2025-05-05 00:56:33.056327 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-05 00:56:33.056332 | orchestrator | Monday 05 May 2025 00:47:57 +0000 (0:00:01.320) 0:04:07.445 ************ 2025-05-05 00:56:33.056338 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.056343 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.056348 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.056354 | orchestrator | 2025-05-05 00:56:33.056359 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-05 00:56:33.056364 | orchestrator | Monday 05 May 2025 00:47:58 +0000 (0:00:00.897) 0:04:08.342 ************ 2025-05-05 00:56:33.056370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.056375 | orchestrator | 2025-05-05 00:56:33.056380 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-05 00:56:33.056386 | orchestrator | Monday 05 May 2025 00:47:58 +0000 (0:00:00.516) 0:04:08.859 ************ 2025-05-05 00:56:33.056391 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.056396 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.056402 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.056407 | orchestrator | 2025-05-05 00:56:33.056412 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-05 00:56:33.056417 | orchestrator | Monday 05 May 2025 00:47:59 +0000 (0:00:00.403) 0:04:09.263 ************ 2025-05-05 00:56:33.056423 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.056428 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.056433 | orchestrator | changed: 
[testbed-node-5] 2025-05-05 00:56:33.056438 | orchestrator | 2025-05-05 00:56:33.056444 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-05 00:56:33.056449 | orchestrator | Monday 05 May 2025 00:48:00 +0000 (0:00:01.176) 0:04:10.439 ************ 2025-05-05 00:56:33.056454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.056460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.056465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.056471 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.056476 | orchestrator | 2025-05-05 00:56:33.056481 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-05 00:56:33.056487 | orchestrator | Monday 05 May 2025 00:48:00 +0000 (0:00:00.582) 0:04:11.021 ************ 2025-05-05 00:56:33.056492 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.056497 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.056503 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.056508 | orchestrator | 2025-05-05 00:56:33.056516 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-05 00:56:33.056522 | orchestrator | Monday 05 May 2025 00:48:01 +0000 (0:00:00.384) 0:04:11.406 ************ 2025-05-05 00:56:33.056527 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.056533 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.056538 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.056544 | orchestrator | 2025-05-05 00:56:33.056549 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-05 00:56:33.056554 | orchestrator | Monday 05 May 2025 00:48:01 +0000 (0:00:00.711) 0:04:12.117 ************ 2025-05-05 00:56:33.056560 | orchestrator | skipping: 
[testbed-node-3] 2025-05-05 00:56:33.056569 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.056606 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.056613 | orchestrator | 2025-05-05 00:56:33.056619 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-05 00:56:33.056624 | orchestrator | Monday 05 May 2025 00:48:02 +0000 (0:00:00.365) 0:04:12.482 ************ 2025-05-05 00:56:33.056630 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.056639 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.056645 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.056650 | orchestrator | 2025-05-05 00:56:33.056655 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-05 00:56:33.056661 | orchestrator | Monday 05 May 2025 00:48:02 +0000 (0:00:00.347) 0:04:12.830 ************ 2025-05-05 00:56:33.056666 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.056671 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.056677 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.056682 | orchestrator | 2025-05-05 00:56:33.056687 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-05 00:56:33.056693 | orchestrator | 2025-05-05 00:56:33.056698 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-05 00:56:33.056703 | orchestrator | Monday 05 May 2025 00:48:05 +0000 (0:00:02.830) 0:04:15.660 ************ 2025-05-05 00:56:33.056709 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:56:33.056714 | orchestrator | 2025-05-05 00:56:33.056720 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-05 00:56:33.056725 | orchestrator | Monday 05 
May 2025 00:48:06 +0000 (0:00:00.643) 0:04:16.304 ************ 2025-05-05 00:56:33.056730 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.056736 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.056741 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.056746 | orchestrator | 2025-05-05 00:56:33.056752 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-05 00:56:33.056757 | orchestrator | Monday 05 May 2025 00:48:06 +0000 (0:00:00.807) 0:04:17.111 ************ 2025-05-05 00:56:33.056762 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.056768 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.056773 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.056778 | orchestrator | 2025-05-05 00:56:33.056784 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-05 00:56:33.056789 | orchestrator | Monday 05 May 2025 00:48:07 +0000 (0:00:00.537) 0:04:17.648 ************ 2025-05-05 00:56:33.056794 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.056800 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.056805 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.056810 | orchestrator | 2025-05-05 00:56:33.056816 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-05 00:56:33.056821 | orchestrator | Monday 05 May 2025 00:48:07 +0000 (0:00:00.362) 0:04:18.010 ************ 2025-05-05 00:56:33.056826 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.056832 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.056837 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.056842 | orchestrator | 2025-05-05 00:56:33.056848 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-05 00:56:33.056853 | orchestrator | Monday 05 May 2025 00:48:08 
+0000 (0:00:00.330) 0:04:18.341 ************
2025-05-05 00:56:33.056858 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.056864 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.056869 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.056874 | orchestrator |
2025-05-05 00:56:33.056880 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-05 00:56:33.056885 | orchestrator | Monday 05 May 2025 00:48:08 +0000 (0:00:00.721) 0:04:19.063 ************
2025-05-05 00:56:33.056890 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.056895 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.056901 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.056906 | orchestrator |
2025-05-05 00:56:33.056911 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-05 00:56:33.056925 | orchestrator | Monday 05 May 2025 00:48:09 +0000 (0:00:00.608) 0:04:19.672 ************
2025-05-05 00:56:33.056931 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.056942 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.056947 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.056953 | orchestrator |
2025-05-05 00:56:33.056958 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-05 00:56:33.056964 | orchestrator | Monday 05 May 2025 00:48:09 +0000 (0:00:00.320) 0:04:19.992 ************
2025-05-05 00:56:33.056969 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.056974 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.056980 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.056985 | orchestrator |
2025-05-05 00:56:33.056990 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-05 00:56:33.056995 | orchestrator | Monday 05 May 2025 00:48:10 +0000 (0:00:00.337) 0:04:20.329 ************
2025-05-05 00:56:33.057001 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057006 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057011 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057017 | orchestrator |
2025-05-05 00:56:33.057022 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-05 00:56:33.057027 | orchestrator | Monday 05 May 2025 00:48:10 +0000 (0:00:00.363) 0:04:20.693 ************
2025-05-05 00:56:33.057032 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057038 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057043 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057048 | orchestrator |
2025-05-05 00:56:33.057054 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-05 00:56:33.057062 | orchestrator | Monday 05 May 2025 00:48:11 +0000 (0:00:00.604) 0:04:21.298 ************
2025-05-05 00:56:33.057067 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.057073 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.057078 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.057083 | orchestrator |
2025-05-05 00:56:33.057089 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-05 00:56:33.057124 | orchestrator | Monday 05 May 2025 00:48:11 +0000 (0:00:00.834) 0:04:22.133 ************
2025-05-05 00:56:33.057132 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057137 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057143 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057148 | orchestrator |
2025-05-05 00:56:33.057154 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-05 00:56:33.057160 | orchestrator | Monday 05 May 2025 00:48:12 +0000 (0:00:00.328) 0:04:22.461 ************
2025-05-05 00:56:33.057165 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.057170 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.057176 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.057181 | orchestrator |
2025-05-05 00:56:33.057186 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-05 00:56:33.057191 | orchestrator | Monday 05 May 2025 00:48:12 +0000 (0:00:00.304) 0:04:22.766 ************
2025-05-05 00:56:33.057197 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057202 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057208 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057216 | orchestrator |
2025-05-05 00:56:33.057222 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-05 00:56:33.057227 | orchestrator | Monday 05 May 2025 00:48:13 +0000 (0:00:00.445) 0:04:23.211 ************
2025-05-05 00:56:33.057232 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057238 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057243 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057248 | orchestrator |
2025-05-05 00:56:33.057254 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-05 00:56:33.057259 | orchestrator | Monday 05 May 2025 00:48:13 +0000 (0:00:00.295) 0:04:23.506 ************
2025-05-05 00:56:33.057265 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057270 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057279 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057285 | orchestrator |
2025-05-05 00:56:33.057290 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-05 00:56:33.057295 | orchestrator | Monday 05 May 2025 00:48:13 +0000 (0:00:00.290) 0:04:23.797 ************
2025-05-05 00:56:33.057301 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057337 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057343 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057348 | orchestrator |
2025-05-05 00:56:33.057354 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-05 00:56:33.057359 | orchestrator | Monday 05 May 2025 00:48:13 +0000 (0:00:00.287) 0:04:24.085 ************
2025-05-05 00:56:33.057365 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057370 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057375 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057381 | orchestrator |
2025-05-05 00:56:33.057386 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-05 00:56:33.057391 | orchestrator | Monday 05 May 2025 00:48:14 +0000 (0:00:00.465) 0:04:24.551 ************
2025-05-05 00:56:33.057397 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.057402 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.057407 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.057413 | orchestrator |
2025-05-05 00:56:33.057418 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-05 00:56:33.057423 | orchestrator | Monday 05 May 2025 00:48:14 +0000 (0:00:00.347) 0:04:24.898 ************
2025-05-05 00:56:33.057429 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.057434 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.057439 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.057445 | orchestrator |
2025-05-05 00:56:33.057450 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-05 00:56:33.057455 | orchestrator | Monday 05 May 2025 00:48:15 +0000 (0:00:00.302) 0:04:25.200 ************
2025-05-05 00:56:33.057461 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057466 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057471 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057477 | orchestrator |
2025-05-05 00:56:33.057482 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-05 00:56:33.057488 | orchestrator | Monday 05 May 2025 00:48:15 +0000 (0:00:00.323) 0:04:25.523 ************
2025-05-05 00:56:33.057493 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057498 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057504 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057509 | orchestrator |
2025-05-05 00:56:33.057515 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-05 00:56:33.057520 | orchestrator | Monday 05 May 2025 00:48:15 +0000 (0:00:00.464) 0:04:25.988 ************
2025-05-05 00:56:33.057525 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057530 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057536 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057541 | orchestrator |
2025-05-05 00:56:33.057546 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-05 00:56:33.057551 | orchestrator | Monday 05 May 2025 00:48:16 +0000 (0:00:00.299) 0:04:26.287 ************
2025-05-05 00:56:33.057557 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057562 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057567 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057573 | orchestrator |
2025-05-05 00:56:33.057578 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-05 00:56:33.057583 | orchestrator | Monday 05 May 2025 00:48:16 +0000 (0:00:00.322) 0:04:26.610 ************
2025-05-05 00:56:33.057588 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057594 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057599 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057608 | orchestrator |
2025-05-05 00:56:33.057613 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-05 00:56:33.057619 | orchestrator | Monday 05 May 2025 00:48:16 +0000 (0:00:00.454) 0:04:27.064 ************
2025-05-05 00:56:33.057624 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057629 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057635 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057640 | orchestrator |
2025-05-05 00:56:33.057645 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-05 00:56:33.057686 | orchestrator | Monday 05 May 2025 00:48:17 +0000 (0:00:00.271) 0:04:27.336 ************
2025-05-05 00:56:33.057694 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057700 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057705 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057710 | orchestrator |
2025-05-05 00:56:33.057716 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-05 00:56:33.057721 | orchestrator | Monday 05 May 2025 00:48:17 +0000 (0:00:00.301) 0:04:27.637 ************
2025-05-05 00:56:33.057727 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057732 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057737 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057743 | orchestrator |
2025-05-05 00:56:33.057748 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-05 00:56:33.057754 | orchestrator | Monday 05 May 2025 00:48:17 +0000 (0:00:00.311) 0:04:27.949 ************
2025-05-05 00:56:33.057759 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057765 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057770 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057775 | orchestrator |
2025-05-05 00:56:33.057780 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-05 00:56:33.057786 | orchestrator | Monday 05 May 2025 00:48:18 +0000 (0:00:00.449) 0:04:28.398 ************
2025-05-05 00:56:33.057791 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057797 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057802 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057807 | orchestrator |
2025-05-05 00:56:33.057813 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-05 00:56:33.057818 | orchestrator | Monday 05 May 2025 00:48:18 +0000 (0:00:00.320) 0:04:28.718 ************
2025-05-05 00:56:33.057824 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057829 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057834 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057843 | orchestrator |
2025-05-05 00:56:33.057848 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-05 00:56:33.057852 | orchestrator | Monday 05 May 2025 00:48:18 +0000 (0:00:00.336) 0:04:29.055 ************
2025-05-05 00:56:33.057857 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057862 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057867 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057872 | orchestrator |
2025-05-05 00:56:33.057877 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-05 00:56:33.057885 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:00.289) 0:04:29.344 ************
2025-05-05 00:56:33.057890 | orchestrator | skipping: [testbed-node-0] => (item=) 
2025-05-05 00:56:33.057895 | orchestrator | skipping: [testbed-node-0] => (item=) 
2025-05-05 00:56:33.057900 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057905 | orchestrator | skipping: [testbed-node-1] => (item=) 
2025-05-05 00:56:33.057909 | orchestrator | skipping: [testbed-node-1] => (item=) 
2025-05-05 00:56:33.057914 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057919 | orchestrator | skipping: [testbed-node-2] => (item=) 
2025-05-05 00:56:33.057924 | orchestrator | skipping: [testbed-node-2] => (item=) 
2025-05-05 00:56:33.057932 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057937 | orchestrator |
2025-05-05 00:56:33.057942 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-05 00:56:33.057947 | orchestrator | Monday 05 May 2025 00:48:19 +0000 (0:00:00.564) 0:04:29.909 ************
2025-05-05 00:56:33.057952 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target) 
2025-05-05 00:56:33.057957 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target) 
2025-05-05 00:56:33.057962 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.057967 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target) 
2025-05-05 00:56:33.057972 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target) 
2025-05-05 00:56:33.057977 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.057982 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target) 
2025-05-05 00:56:33.057987 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target) 
2025-05-05 00:56:33.057992 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.057996 | orchestrator |
2025-05-05 00:56:33.058001 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-05 00:56:33.058006 | orchestrator | Monday 05 May 2025 00:48:20 +0000 (0:00:00.356) 0:04:30.265 ************
2025-05-05 00:56:33.058011 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058031 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058036 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058041 | orchestrator |
2025-05-05 00:56:33.058046 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-05 00:56:33.058051 | orchestrator | Monday 05 May 2025 00:48:20 +0000 (0:00:00.257) 0:04:30.523 ************
2025-05-05 00:56:33.058056 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058060 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058065 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058070 | orchestrator |
2025-05-05 00:56:33.058084 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-05 00:56:33.058089 | orchestrator | Monday 05 May 2025 00:48:20 +0000 (0:00:00.293) 0:04:30.817 ************
2025-05-05 00:56:33.058094 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058099 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058104 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058109 | orchestrator |
2025-05-05 00:56:33.058114 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-05 00:56:33.058118 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.456) 0:04:31.273 ************
2025-05-05 00:56:33.058123 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058128 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058133 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058138 | orchestrator |
2025-05-05 00:56:33.058173 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-05 00:56:33.058180 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.299) 0:04:31.573 ************
2025-05-05 00:56:33.058185 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058190 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058195 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058200 | orchestrator |
2025-05-05 00:56:33.058205 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-05 00:56:33.058210 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.295) 0:04:31.869 ************
2025-05-05 00:56:33.058215 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058220 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058225 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058230 | orchestrator |
2025-05-05 00:56:33.058235 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-05 00:56:33.058240 | orchestrator | Monday 05 May 2025 00:48:21 +0000 (0:00:00.296) 0:04:32.166 ************
2025-05-05 00:56:33.058250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2025-05-05 00:56:33.058255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2025-05-05 00:56:33.058260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2025-05-05 00:56:33.058278 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058283 | orchestrator |
2025-05-05 00:56:33.058288 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-05 00:56:33.058293 | orchestrator | Monday 05 May 2025 00:48:22 +0000 (0:00:00.581) 0:04:32.747 ************
2025-05-05 00:56:33.058298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2025-05-05 00:56:33.058303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2025-05-05 00:56:33.058321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2025-05-05 00:56:33.058326 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058331 | orchestrator |
2025-05-05 00:56:33.058336 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-05 00:56:33.058340 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.681) 0:04:33.428 ************
2025-05-05 00:56:33.058346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2025-05-05 00:56:33.058351 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2025-05-05 00:56:33.058356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2025-05-05 00:56:33.058360 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058365 | orchestrator |
2025-05-05 00:56:33.058370 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.058375 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.381) 0:04:33.809 ************
2025-05-05 00:56:33.058380 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058385 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058390 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058394 | orchestrator |
2025-05-05 00:56:33.058399 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-05 00:56:33.058406 | orchestrator | Monday 05 May 2025 00:48:23 +0000 (0:00:00.297) 0:04:34.106 ************
2025-05-05 00:56:33.058412 | orchestrator | skipping: [testbed-node-0] => (item=0) 
2025-05-05 00:56:33.058417 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058421 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2025-05-05 00:56:33.058426 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058431 | orchestrator | skipping: [testbed-node-2] => (item=0) 
2025-05-05 00:56:33.058436 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058441 | orchestrator |
2025-05-05 00:56:33.058446 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-05 00:56:33.058451 | orchestrator | Monday 05 May 2025 00:48:24 +0000 (0:00:00.417) 0:04:34.524 ************
2025-05-05 00:56:33.058456 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058461 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058465 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058470 | orchestrator |
2025-05-05 00:56:33.058475 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.058480 | orchestrator | Monday 05 May 2025 00:48:24 +0000 (0:00:00.310) 0:04:34.834 ************
2025-05-05 00:56:33.058485 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058490 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058495 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058500 | orchestrator |
2025-05-05 00:56:33.058505 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-05 00:56:33.058510 | orchestrator | Monday 05 May 2025 00:48:25 +0000 (0:00:00.481) 0:04:35.315 ************
2025-05-05 00:56:33.058515 | orchestrator | skipping: [testbed-node-0] => (item=0) 
2025-05-05 00:56:33.058520 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058524 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2025-05-05 00:56:33.058534 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058539 | orchestrator | skipping: [testbed-node-2] => (item=0) 
2025-05-05 00:56:33.058544 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058549 | orchestrator |
2025-05-05 00:56:33.058554 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-05 00:56:33.058559 | orchestrator | Monday 05 May 2025 00:48:25 +0000 (0:00:00.423) 0:04:35.739 ************
2025-05-05 00:56:33.058564 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058569 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058573 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058578 | orchestrator |
2025-05-05 00:56:33.058583 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-05 00:56:33.058588 | orchestrator | Monday 05 May 2025 00:48:25 +0000 (0:00:00.297) 0:04:36.037 ************
2025-05-05 00:56:33.058593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2025-05-05 00:56:33.058598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2025-05-05 00:56:33.058603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2025-05-05 00:56:33.058608 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2025-05-05 00:56:33.058642 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2025-05-05 00:56:33.058649 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2025-05-05 00:56:33.058654 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058660 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2025-05-05 00:56:33.058672 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2025-05-05 00:56:33.058677 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2025-05-05 00:56:33.058682 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058692 | orchestrator |
2025-05-05 00:56:33.058697 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-05 00:56:33.058702 | orchestrator | Monday 05 May 2025 00:48:26 +0000 (0:00:00.755) 0:04:36.792 ************
2025-05-05 00:56:33.058707 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058712 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058716 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058721 | orchestrator |
2025-05-05 00:56:33.058726 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-05 00:56:33.058731 | orchestrator | Monday 05 May 2025 00:48:27 +0000 (0:00:00.580) 0:04:37.373 ************
2025-05-05 00:56:33.058736 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058741 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058746 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058751 | orchestrator |
2025-05-05 00:56:33.058756 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-05 00:56:33.058761 | orchestrator | Monday 05 May 2025 00:48:28 +0000 (0:00:00.836) 0:04:38.210 ************
2025-05-05 00:56:33.058765 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058770 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058775 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058780 | orchestrator |
2025-05-05 00:56:33.058785 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-05 00:56:33.058790 | orchestrator | Monday 05 May 2025 00:48:28 +0000 (0:00:00.572) 0:04:38.783 ************
2025-05-05 00:56:33.058795 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058800 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.058804 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.058809 | orchestrator |
2025-05-05 00:56:33.058814 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] **********************************
2025-05-05 00:56:33.058819 | orchestrator | Monday 05 May 2025 00:48:29 +0000 (0:00:00.810) 0:04:39.594 ************
2025-05-05 00:56:33.058824 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.058833 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.058838 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.058843 | orchestrator |
2025-05-05 00:56:33.058847 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] **********************************
2025-05-05 00:56:33.058852 | orchestrator | Monday 05 May 2025 00:48:29 +0000 (0:00:00.388) 0:04:39.982 ************
2025-05-05 00:56:33.058857 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.058862 | orchestrator |
2025-05-05 00:56:33.058867 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] **************
2025-05-05 00:56:33.058872 | orchestrator | Monday 05 May 2025 00:48:30 +0000 (0:00:00.859) 0:04:40.842 ************
2025-05-05 00:56:33.058877 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.058882 | orchestrator |
2025-05-05 00:56:33.058886 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] *****************************
2025-05-05 00:56:33.058891 | orchestrator | Monday 05 May 2025 00:48:30 +0000 (0:00:00.150) 0:04:40.993 ************
2025-05-05 00:56:33.058896 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-05 00:56:33.058901 | orchestrator |
2025-05-05 00:56:33.058906 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] ****************************
2025-05-05 00:56:33.058911 | orchestrator | Monday 05 May 2025 00:48:31 +0000 (0:00:00.610) 0:04:41.603 ************
2025-05-05 00:56:33.058915 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.058920 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.058925 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.058930 | orchestrator |
2025-05-05 00:56:33.058935 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] *******************
2025-05-05 00:56:33.058940 | orchestrator | Monday 05 May 2025 00:48:31 +0000 (0:00:00.379) 0:04:41.983 ************
2025-05-05 00:56:33.058945 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.058950 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.058954 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.058959 | orchestrator |
2025-05-05 00:56:33.058964 | orchestrator | TASK [ceph-mon : create monitor initial keyring] *******************************
2025-05-05 00:56:33.058971 | orchestrator | Monday 05 May 2025 00:48:32 +0000 (0:00:00.335) 0:04:42.319 ************
2025-05-05 00:56:33.058976 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.058981 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.058986 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.058991 | orchestrator |
2025-05-05 00:56:33.059003 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] ***********
2025-05-05 00:56:33.059008 | orchestrator | Monday 05 May 2025 00:48:33 +0000 (0:00:01.284) 0:04:43.604 ************
2025-05-05 00:56:33.059013 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059018 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059023 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059028 | orchestrator |
2025-05-05 00:56:33.059033 | orchestrator | TASK [ceph-mon : create monitor directory] *************************************
2025-05-05 00:56:33.059038 | orchestrator | Monday 05 May 2025 00:48:34 +0000 (0:00:00.769) 0:04:44.373 ************
2025-05-05 00:56:33.059043 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059048 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059053 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059057 | orchestrator |
2025-05-05 00:56:33.059062 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] ***************
2025-05-05 00:56:33.059067 | orchestrator | Monday 05 May 2025 00:48:34 +0000 (0:00:00.624) 0:04:44.997 ************
2025-05-05 00:56:33.059072 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.059077 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.059082 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.059087 | orchestrator |
2025-05-05 00:56:33.059122 | orchestrator | TASK [ceph-mon : create custom admin keyring] **********************************
2025-05-05 00:56:33.059129 | orchestrator | Monday 05 May 2025 00:48:35 +0000 (0:00:00.634) 0:04:45.632 ************
2025-05-05 00:56:33.059134 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.059144 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.059149 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.059154 | orchestrator |
2025-05-05 00:56:33.059159 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] *********************
2025-05-05 00:56:33.059164 | orchestrator | Monday 05 May 2025 00:48:35 +0000 (0:00:00.433) 0:04:46.066 ************
2025-05-05 00:56:33.059169 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.059174 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.059179 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.059184 | orchestrator |
2025-05-05 00:56:33.059189 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************
2025-05-05 00:56:33.059194 | orchestrator | Monday 05 May 2025 00:48:36 +0000 (0:00:00.301) 0:04:46.367 ************
2025-05-05 00:56:33.059199 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.059204 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.059209 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.059214 | orchestrator |
2025-05-05 00:56:33.059219 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] **************************
2025-05-05 00:56:33.059224 | orchestrator | Monday 05 May 2025 00:48:36 +0000 (0:00:00.295) 0:04:46.663 ************
2025-05-05 00:56:33.059229 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.059234 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.059239 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.059244 | orchestrator |
2025-05-05 00:56:33.059249 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
2025-05-05 00:56:33.059254 | orchestrator | Monday 05 May 2025 00:48:36 +0000 (0:00:00.297) 0:04:46.961 ************
2025-05-05 00:56:33.059259 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059264 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059269 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059274 | orchestrator |
2025-05-05 00:56:33.059279 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
2025-05-05 00:56:33.059284 | orchestrator | Monday 05 May 2025 00:48:38 +0000 (0:00:01.376) 0:04:48.338 ************
2025-05-05 00:56:33.059289 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.059294 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.059299 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.059318 | orchestrator |
2025-05-05 00:56:33.059323 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************
2025-05-05 00:56:33.059329 | orchestrator | Monday 05 May 2025 00:48:38 +0000 (0:00:00.276) 0:04:48.614 ************
2025-05-05 00:56:33.059334 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.059339 | orchestrator |
2025-05-05 00:56:33.059344 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] *************
2025-05-05 00:56:33.059348 | orchestrator | Monday 05 May 2025 00:48:38 +0000 (0:00:00.494) 0:04:49.109 ************
2025-05-05 00:56:33.059353 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.059358 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.059363 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.059368 | orchestrator |
2025-05-05 00:56:33.059373 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
2025-05-05 00:56:33.059378 | orchestrator | Monday 05 May 2025 00:48:39 +0000 (0:00:00.395) 0:04:49.504 ************
2025-05-05 00:56:33.059383 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.059388 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.059393 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.059397 | orchestrator |
2025-05-05 00:56:33.059402 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************
2025-05-05 00:56:33.059407 | orchestrator | Monday 05 May 2025 00:48:39 +0000 (0:00:00.257) 0:04:49.761 ************
2025-05-05 00:56:33.059412 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.059417 | orchestrator |
2025-05-05 00:56:33.059425 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] *****************
2025-05-05 00:56:33.059430 | orchestrator | Monday 05 May 2025 00:48:40 +0000 (0:00:00.508) 0:04:50.270 ************
2025-05-05 00:56:33.059435 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059440 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059445 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059450 | orchestrator |
2025-05-05 00:56:33.059455 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************
2025-05-05 00:56:33.059460 | orchestrator | Monday 05 May 2025 00:48:41 +0000 (0:00:01.288) 0:04:51.558 ************
2025-05-05 00:56:33.059465 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059470 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059475 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059480 | orchestrator |
2025-05-05 00:56:33.059484 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] ***************************************
2025-05-05 00:56:33.059492 | orchestrator | Monday 05 May 2025 00:48:42 +0000 (0:00:01.086) 0:04:52.644 ************
2025-05-05 00:56:33.059497 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059502 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059507 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059512 | orchestrator |
2025-05-05 00:56:33.059516 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************
2025-05-05 00:56:33.059521 | orchestrator | Monday 05 May 2025 00:48:44 +0000 (0:00:01.583) 0:04:54.228 ************
2025-05-05 00:56:33.059526 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059531 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059536 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059541 | orchestrator |
2025-05-05 00:56:33.059546 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************
2025-05-05 00:56:33.059551 | orchestrator | Monday 05 May 2025 00:48:46 +0000 (0:00:02.172) 0:04:56.401 ************
2025-05-05 00:56:33.059556 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.059561 | orchestrator |
2025-05-05 00:56:33.059593 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************
2025-05-05 00:56:33.059600 | orchestrator | Monday 05 May 2025 00:48:46 +0000 (0:00:00.637) 0:04:57.038 ************
2025-05-05 00:56:33.059605 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-05 00:56:33.059610 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.059615 | orchestrator |
2025-05-05 00:56:33.059620 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] **************************************
2025-05-05 00:56:33.059625 | orchestrator | Monday 05 May 2025 00:49:08 +0000 (0:00:21.447) 0:05:18.486 ************
2025-05-05 00:56:33.059630 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.059635 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.059640 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.059645 | orchestrator |
2025-05-05 00:56:33.059650 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] ***********************************
2025-05-05 00:56:33.059655 | orchestrator | Monday 05 May 2025 00:49:22 +0000 (0:00:13.728) 0:05:32.214 ************
2025-05-05 00:56:33.059661 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.059665 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.059670 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.059675 | orchestrator |
2025-05-05 00:56:33.059680 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-05 00:56:33.059685 | orchestrator | Monday 05 May 2025 00:49:23 +0000 (0:00:01.099) 0:05:33.314 ************
2025-05-05 00:56:33.059690 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.059695 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.059700 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.059705 | orchestrator |
2025-05-05 00:56:33.059710 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-05 00:56:33.059719 | orchestrator | Monday 05 May 2025 00:49:23 +0000 (0:00:00.841) 0:05:34.156 ************ 2025-05-05 00:56:33.059724 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:56:33.059729 | orchestrator | 2025-05-05 00:56:33.059734 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-05 00:56:33.059739 | orchestrator | Monday 05 May 2025 00:49:24 +0000 (0:00:00.553) 0:05:34.709 ************ 2025-05-05 00:56:33.059744 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.059749 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.059754 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.059759 | orchestrator | 2025-05-05 00:56:33.059763 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-05 00:56:33.059768 | orchestrator | Monday 05 May 2025 00:49:24 +0000 (0:00:00.340) 0:05:35.050 ************ 2025-05-05 00:56:33.059773 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.059778 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.059783 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.059788 | orchestrator | 2025-05-05 00:56:33.059793 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-05 00:56:33.059798 | orchestrator | Monday 05 May 2025 00:49:26 +0000 (0:00:01.529) 0:05:36.579 ************ 2025-05-05 00:56:33.059803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:56:33.059808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:56:33.059813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:56:33.059818 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.059823 | orchestrator | 2025-05-05 00:56:33.059828 | orchestrator | RUNNING 
HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-05 00:56:33.059832 | orchestrator | Monday 05 May 2025 00:49:27 +0000 (0:00:00.717) 0:05:37.296 ************ 2025-05-05 00:56:33.059837 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.059844 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.059852 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.059859 | orchestrator | 2025-05-05 00:56:33.059867 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-05 00:56:33.059875 | orchestrator | Monday 05 May 2025 00:49:27 +0000 (0:00:00.346) 0:05:37.642 ************ 2025-05-05 00:56:33.059883 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.059890 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.059898 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.059905 | orchestrator | 2025-05-05 00:56:33.059913 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-05 00:56:33.059920 | orchestrator | 2025-05-05 00:56:33.059927 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-05 00:56:33.059939 | orchestrator | Monday 05 May 2025 00:49:29 +0000 (0:00:02.165) 0:05:39.808 ************ 2025-05-05 00:56:33.059945 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:56:33.059950 | orchestrator | 2025-05-05 00:56:33.059955 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-05 00:56:33.059960 | orchestrator | Monday 05 May 2025 00:49:30 +0000 (0:00:00.534) 0:05:40.342 ************ 2025-05-05 00:56:33.059965 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.059970 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.059974 | orchestrator | ok: [testbed-node-2] 2025-05-05 
00:56:33.059979 | orchestrator | 2025-05-05 00:56:33.059984 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-05 00:56:33.059989 | orchestrator | Monday 05 May 2025 00:49:30 +0000 (0:00:00.703) 0:05:41.046 ************ 2025-05-05 00:56:33.059994 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.059999 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060004 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060009 | orchestrator | 2025-05-05 00:56:33.060021 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-05 00:56:33.060026 | orchestrator | Monday 05 May 2025 00:49:31 +0000 (0:00:00.518) 0:05:41.564 ************ 2025-05-05 00:56:33.060031 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060036 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060041 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060048 | orchestrator | 2025-05-05 00:56:33.060084 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-05 00:56:33.060092 | orchestrator | Monday 05 May 2025 00:49:31 +0000 (0:00:00.323) 0:05:41.888 ************ 2025-05-05 00:56:33.060097 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060102 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060107 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060111 | orchestrator | 2025-05-05 00:56:33.060116 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-05 00:56:33.060121 | orchestrator | Monday 05 May 2025 00:49:32 +0000 (0:00:00.337) 0:05:42.226 ************ 2025-05-05 00:56:33.060126 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.060131 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.060136 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.060141 | 
orchestrator | 2025-05-05 00:56:33.060146 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-05 00:56:33.060151 | orchestrator | Monday 05 May 2025 00:49:32 +0000 (0:00:00.701) 0:05:42.927 ************ 2025-05-05 00:56:33.060155 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060160 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060165 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060170 | orchestrator | 2025-05-05 00:56:33.060175 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-05 00:56:33.060180 | orchestrator | Monday 05 May 2025 00:49:33 +0000 (0:00:00.610) 0:05:43.537 ************ 2025-05-05 00:56:33.060184 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060189 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060194 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060199 | orchestrator | 2025-05-05 00:56:33.060204 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-05 00:56:33.060209 | orchestrator | Monday 05 May 2025 00:49:33 +0000 (0:00:00.379) 0:05:43.916 ************ 2025-05-05 00:56:33.060214 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060219 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060223 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060228 | orchestrator | 2025-05-05 00:56:33.060233 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-05 00:56:33.060238 | orchestrator | Monday 05 May 2025 00:49:34 +0000 (0:00:00.352) 0:05:44.268 ************ 2025-05-05 00:56:33.060243 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060248 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060253 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060258 | 
orchestrator | 2025-05-05 00:56:33.060263 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-05 00:56:33.060268 | orchestrator | Monday 05 May 2025 00:49:34 +0000 (0:00:00.340) 0:05:44.609 ************ 2025-05-05 00:56:33.060273 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060278 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060283 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060288 | orchestrator | 2025-05-05 00:56:33.060292 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-05 00:56:33.060297 | orchestrator | Monday 05 May 2025 00:49:35 +0000 (0:00:00.594) 0:05:45.204 ************ 2025-05-05 00:56:33.060302 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.060339 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.060344 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.060349 | orchestrator | 2025-05-05 00:56:33.060354 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-05 00:56:33.060363 | orchestrator | Monday 05 May 2025 00:49:35 +0000 (0:00:00.812) 0:05:46.017 ************ 2025-05-05 00:56:33.060368 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060373 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060378 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060382 | orchestrator | 2025-05-05 00:56:33.060387 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-05 00:56:33.060392 | orchestrator | Monday 05 May 2025 00:49:36 +0000 (0:00:00.382) 0:05:46.400 ************ 2025-05-05 00:56:33.060397 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.060402 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.060407 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.060412 | orchestrator | 2025-05-05 
00:56:33.060417 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-05 00:56:33.060422 | orchestrator | Monday 05 May 2025 00:49:36 +0000 (0:00:00.452) 0:05:46.853 ************ 2025-05-05 00:56:33.060427 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060431 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060436 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060441 | orchestrator | 2025-05-05 00:56:33.060446 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-05 00:56:33.060451 | orchestrator | Monday 05 May 2025 00:49:37 +0000 (0:00:00.818) 0:05:47.671 ************ 2025-05-05 00:56:33.060456 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060461 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060466 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060471 | orchestrator | 2025-05-05 00:56:33.060476 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-05 00:56:33.060480 | orchestrator | Monday 05 May 2025 00:49:37 +0000 (0:00:00.350) 0:05:48.021 ************ 2025-05-05 00:56:33.060485 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060490 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060495 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060500 | orchestrator | 2025-05-05 00:56:33.060505 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-05 00:56:33.060510 | orchestrator | Monday 05 May 2025 00:49:38 +0000 (0:00:00.378) 0:05:48.400 ************ 2025-05-05 00:56:33.060514 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060519 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060524 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060529 | orchestrator | 2025-05-05 
00:56:33.060534 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-05 00:56:33.060541 | orchestrator | Monday 05 May 2025 00:49:38 +0000 (0:00:00.306) 0:05:48.706 ************ 2025-05-05 00:56:33.060547 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060551 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060556 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060561 | orchestrator | 2025-05-05 00:56:33.060595 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-05 00:56:33.060602 | orchestrator | Monday 05 May 2025 00:49:39 +0000 (0:00:00.723) 0:05:49.429 ************ 2025-05-05 00:56:33.060607 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.060612 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.060617 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.060622 | orchestrator | 2025-05-05 00:56:33.060627 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-05 00:56:33.060632 | orchestrator | Monday 05 May 2025 00:49:39 +0000 (0:00:00.424) 0:05:49.854 ************ 2025-05-05 00:56:33.060637 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.060642 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.060647 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.060655 | orchestrator | 2025-05-05 00:56:33.060660 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-05 00:56:33.060665 | orchestrator | Monday 05 May 2025 00:49:40 +0000 (0:00:00.412) 0:05:50.267 ************ 2025-05-05 00:56:33.060673 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060678 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060683 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060688 | orchestrator | 2025-05-05 00:56:33.060693 | orchestrator | TASK 
[ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-05 00:56:33.060698 | orchestrator | Monday 05 May 2025 00:49:40 +0000 (0:00:00.373) 0:05:50.640 ************ 2025-05-05 00:56:33.060703 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060708 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060713 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060717 | orchestrator | 2025-05-05 00:56:33.060722 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-05 00:56:33.060728 | orchestrator | Monday 05 May 2025 00:49:41 +0000 (0:00:00.599) 0:05:51.239 ************ 2025-05-05 00:56:33.060733 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060738 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060743 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060752 | orchestrator | 2025-05-05 00:56:33.060757 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-05 00:56:33.060762 | orchestrator | Monday 05 May 2025 00:49:41 +0000 (0:00:00.293) 0:05:51.533 ************ 2025-05-05 00:56:33.060767 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060772 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060777 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060782 | orchestrator | 2025-05-05 00:56:33.060787 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-05 00:56:33.060792 | orchestrator | Monday 05 May 2025 00:49:41 +0000 (0:00:00.293) 0:05:51.826 ************ 2025-05-05 00:56:33.060797 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060801 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060806 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060811 | orchestrator | 2025-05-05 00:56:33.060816 | orchestrator | TASK 
[ceph-config : set_fact rejected_devices] ********************************* 2025-05-05 00:56:33.060821 | orchestrator | Monday 05 May 2025 00:49:42 +0000 (0:00:00.441) 0:05:52.267 ************ 2025-05-05 00:56:33.060826 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060831 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060836 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060841 | orchestrator | 2025-05-05 00:56:33.060846 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-05 00:56:33.060850 | orchestrator | Monday 05 May 2025 00:49:42 +0000 (0:00:00.245) 0:05:52.513 ************ 2025-05-05 00:56:33.060855 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060861 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060865 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060870 | orchestrator | 2025-05-05 00:56:33.060875 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-05 00:56:33.060880 | orchestrator | Monday 05 May 2025 00:49:42 +0000 (0:00:00.267) 0:05:52.781 ************ 2025-05-05 00:56:33.060885 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060890 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060895 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060900 | orchestrator | 2025-05-05 00:56:33.060905 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-05 00:56:33.060910 | orchestrator | Monday 05 May 2025 00:49:42 +0000 (0:00:00.300) 0:05:53.082 ************ 2025-05-05 00:56:33.060915 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060920 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060925 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060938 | orchestrator | 
2025-05-05 00:56:33.060943 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-05 00:56:33.060948 | orchestrator | Monday 05 May 2025 00:49:43 +0000 (0:00:00.419) 0:05:53.501 ************ 2025-05-05 00:56:33.060957 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060961 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060966 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.060971 | orchestrator | 2025-05-05 00:56:33.060976 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-05 00:56:33.060981 | orchestrator | Monday 05 May 2025 00:49:43 +0000 (0:00:00.254) 0:05:53.756 ************ 2025-05-05 00:56:33.060986 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.060991 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.060996 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061001 | orchestrator | 2025-05-05 00:56:33.061006 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-05 00:56:33.061011 | orchestrator | Monday 05 May 2025 00:49:43 +0000 (0:00:00.247) 0:05:54.003 ************ 2025-05-05 00:56:33.061015 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061020 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061025 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061030 | orchestrator | 2025-05-05 00:56:33.061035 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-05 00:56:33.061040 | orchestrator | Monday 05 May 2025 00:49:44 +0000 (0:00:00.254) 0:05:54.258 ************ 2025-05-05 00:56:33.061072 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-05 00:56:33.061079 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-05 00:56:33.061085 | orchestrator 
| skipping: [testbed-node-0] 2025-05-05 00:56:33.061090 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-05 00:56:33.061094 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-05 00:56:33.061099 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061104 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-05 00:56:33.061109 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-05 00:56:33.061114 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061119 | orchestrator | 2025-05-05 00:56:33.061123 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-05 00:56:33.061128 | orchestrator | Monday 05 May 2025 00:49:44 +0000 (0:00:00.431) 0:05:54.689 ************ 2025-05-05 00:56:33.061133 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-05 00:56:33.061138 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-05 00:56:33.061143 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061148 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-05 00:56:33.061153 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-05 00:56:33.061158 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061163 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-05 00:56:33.061167 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-05 00:56:33.061172 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061177 | orchestrator | 2025-05-05 00:56:33.061182 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-05 00:56:33.061187 | orchestrator | Monday 05 May 2025 00:49:44 +0000 (0:00:00.325) 0:05:55.014 ************ 2025-05-05 00:56:33.061192 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061197 | 
orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061201 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061206 | orchestrator | 2025-05-05 00:56:33.061211 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-05 00:56:33.061219 | orchestrator | Monday 05 May 2025 00:49:45 +0000 (0:00:00.292) 0:05:55.307 ************ 2025-05-05 00:56:33.061223 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061228 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061233 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061238 | orchestrator | 2025-05-05 00:56:33.061246 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-05 00:56:33.061251 | orchestrator | Monday 05 May 2025 00:49:45 +0000 (0:00:00.296) 0:05:55.604 ************ 2025-05-05 00:56:33.061256 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061261 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061266 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061274 | orchestrator | 2025-05-05 00:56:33.061279 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-05 00:56:33.061283 | orchestrator | Monday 05 May 2025 00:49:45 +0000 (0:00:00.484) 0:05:56.089 ************ 2025-05-05 00:56:33.061288 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061293 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061298 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061303 | orchestrator | 2025-05-05 00:56:33.061322 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-05 00:56:33.061327 | orchestrator | Monday 05 May 2025 00:49:46 +0000 (0:00:00.302) 0:05:56.391 ************ 2025-05-05 00:56:33.061332 | orchestrator | 
skipping: [testbed-node-0] 2025-05-05 00:56:33.061337 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061342 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061347 | orchestrator | 2025-05-05 00:56:33.061352 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-05 00:56:33.061357 | orchestrator | Monday 05 May 2025 00:49:46 +0000 (0:00:00.294) 0:05:56.686 ************ 2025-05-05 00:56:33.061362 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061367 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.061371 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.061376 | orchestrator | 2025-05-05 00:56:33.061381 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-05 00:56:33.061386 | orchestrator | Monday 05 May 2025 00:49:46 +0000 (0:00:00.296) 0:05:56.982 ************ 2025-05-05 00:56:33.061391 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.061396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.061401 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.061406 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061411 | orchestrator | 2025-05-05 00:56:33.061416 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-05 00:56:33.061423 | orchestrator | Monday 05 May 2025 00:49:47 +0000 (0:00:00.545) 0:05:57.528 ************ 2025-05-05 00:56:33.061431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.061439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.061446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.061453 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.061460 | 
orchestrator |
2025-05-05 00:56:33.061467 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-05 00:56:33.061474 | orchestrator | Monday 05 May 2025 00:49:48 +0000 (0:00:00.738) 0:05:58.266 ************
2025-05-05 00:56:33.061482 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.061489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.061497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.061504 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061513 | orchestrator |
2025-05-05 00:56:33.061518 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.061558 | orchestrator | Monday 05 May 2025 00:49:48 +0000 (0:00:00.380) 0:05:58.646 ************
2025-05-05 00:56:33.061565 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061570 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061575 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061580 | orchestrator |
2025-05-05 00:56:33.061592 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-05 00:56:33.061597 | orchestrator | Monday 05 May 2025 00:49:48 +0000 (0:00:00.350) 0:05:58.997 ************
2025-05-05 00:56:33.061602 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-05 00:56:33.061607 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061612 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-05 00:56:33.061617 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061622 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-05 00:56:33.061626 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061631 | orchestrator |
2025-05-05 00:56:33.061636 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-05 00:56:33.061641 | orchestrator | Monday 05 May 2025 00:49:49 +0000 (0:00:00.438) 0:05:59.436 ************
2025-05-05 00:56:33.061646 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061651 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061656 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061661 | orchestrator |
2025-05-05 00:56:33.061666 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.061671 | orchestrator | Monday 05 May 2025 00:49:49 +0000 (0:00:00.300) 0:05:59.736 ************
2025-05-05 00:56:33.061676 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061681 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061686 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061690 | orchestrator |
2025-05-05 00:56:33.061695 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-05 00:56:33.061700 | orchestrator | Monday 05 May 2025 00:49:50 +0000 (0:00:00.469) 0:06:00.206 ************
2025-05-05 00:56:33.061705 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-05 00:56:33.061710 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061715 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-05 00:56:33.061720 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061725 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-05 00:56:33.061730 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061735 | orchestrator |
2025-05-05 00:56:33.061740 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-05 00:56:33.061745 | orchestrator | Monday 05 May 2025 00:49:50 +0000 (0:00:00.276) 0:06:00.678 ************
2025-05-05 00:56:33.061750 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061754 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061759 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061764 | orchestrator |
2025-05-05 00:56:33.061769 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-05 00:56:33.061774 | orchestrator | Monday 05 May 2025 00:49:50 +0000 (0:00:00.276) 0:06:00.955 ************
2025-05-05 00:56:33.061779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-05 00:56:33.061784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-05 00:56:33.061789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-05 00:56:33.061794 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-05 00:56:33.061804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-05 00:56:33.061808 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-05 00:56:33.061814 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061819 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-05 00:56:33.061823 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-05 00:56:33.061828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-05 00:56:33.061834 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061838 | orchestrator |
2025-05-05 00:56:33.061843 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-05 00:56:33.061852 | orchestrator | Monday 05 May 2025 00:49:51 +0000 (0:00:00.834) 0:06:01.790 ************
2025-05-05 00:56:33.061857 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061862 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061867 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061872 | orchestrator |
2025-05-05 00:56:33.061876 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-05 00:56:33.061881 | orchestrator | Monday 05 May 2025 00:49:52 +0000 (0:00:00.564) 0:06:02.354 ************
2025-05-05 00:56:33.061886 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061891 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061896 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061901 | orchestrator |
2025-05-05 00:56:33.061908 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-05 00:56:33.061913 | orchestrator | Monday 05 May 2025 00:49:52 +0000 (0:00:00.771) 0:06:03.126 ************
2025-05-05 00:56:33.061918 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061923 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061928 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061942 | orchestrator |
2025-05-05 00:56:33.061947 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-05 00:56:33.061952 | orchestrator | Monday 05 May 2025 00:49:53 +0000 (0:00:00.557) 0:06:03.684 ************
2025-05-05 00:56:33.061957 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.061962 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.061967 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.061971 | orchestrator |
2025-05-05 00:56:33.061976 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] **********************************
2025-05-05 00:56:33.061981 | orchestrator | Monday 05 May 2025 00:49:54 +0000 (0:00:00.754) 0:06:04.439 ************
2025-05-05 00:56:33.061986 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.062040 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:56:33.062049 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:56:33.062054 | orchestrator |
2025-05-05 00:56:33.062059 | orchestrator | TASK [ceph-mgr : include common.yml] *******************************************
2025-05-05 00:56:33.062064 | orchestrator | Monday 05 May 2025 00:49:54 +0000 (0:00:00.678) 0:06:05.117 ************
2025-05-05 00:56:33.062070 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.062075 | orchestrator |
2025-05-05 00:56:33.062079 | orchestrator | TASK [ceph-mgr : create mgr directory] *****************************************
2025-05-05 00:56:33.062084 | orchestrator | Monday 05 May 2025 00:49:55 +0000 (0:00:00.580) 0:06:05.698 ************
2025-05-05 00:56:33.062089 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.062094 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.062099 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.062104 | orchestrator |
2025-05-05 00:56:33.062109 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] ***************************************
2025-05-05 00:56:33.062114 | orchestrator | Monday 05 May 2025 00:49:56 +0000 (0:00:01.004) 0:06:06.702 ************
2025-05-05 00:56:33.062119 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.062123 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.062128 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.062133 | orchestrator |
2025-05-05 00:56:33.062138 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] *********************
2025-05-05 00:56:33.062143 | orchestrator | Monday 05 May 2025 00:49:56 +0000 (0:00:00.434) 0:06:07.137 ************
2025-05-05 00:56:33.062148 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062153 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062158 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062163 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-05-05 00:56:33.062173 | orchestrator |
2025-05-05 00:56:33.062178 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] *******************************************
2025-05-05 00:56:33.062183 | orchestrator | Monday 05 May 2025 00:50:05 +0000 (0:00:08.335) 0:06:15.472 ************
2025-05-05 00:56:33.062188 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.062197 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.062202 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.062207 | orchestrator |
2025-05-05 00:56:33.062212 | orchestrator | TASK [ceph-mgr : get keys from monitors] ***************************************
2025-05-05 00:56:33.062217 | orchestrator | Monday 05 May 2025 00:50:05 +0000 (0:00:00.389) 0:06:15.862 ************
2025-05-05 00:56:33.062222 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062227 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-05 00:56:33.062232 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-05 00:56:33.062236 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062242 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:56:33.062249 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:56:33.062257 | orchestrator |
2025-05-05 00:56:33.062264 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] ***********************************
2025-05-05 00:56:33.062272 | orchestrator | Monday 05 May 2025 00:50:07 +0000 (0:00:01.794) 0:06:17.656 ************
2025-05-05 00:56:33.062279 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062287 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-05 00:56:33.062295 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-05 00:56:33.062302 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 00:56:33.062346 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-05 00:56:33.062352 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-05 00:56:33.062357 | orchestrator |
2025-05-05 00:56:33.062362 | orchestrator | TASK [ceph-mgr : set mgr key permissions] **************************************
2025-05-05 00:56:33.062367 | orchestrator | Monday 05 May 2025 00:50:08 +0000 (0:00:01.493) 0:06:19.150 ************
2025-05-05 00:56:33.062372 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.062377 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.062382 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.062387 | orchestrator |
2025-05-05 00:56:33.062392 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] *****************
2025-05-05 00:56:33.062398 | orchestrator | Monday 05 May 2025 00:50:09 +0000 (0:00:00.716) 0:06:19.866 ************
2025-05-05 00:56:33.062403 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.062408 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.062413 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.062418 | orchestrator |
2025-05-05 00:56:33.062423 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************
2025-05-05 00:56:33.062428 | orchestrator | Monday 05 May 2025 00:50:10 +0000 (0:00:00.336) 0:06:20.203 ************
2025-05-05 00:56:33.062433 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.062438 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.062443 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.062448 | orchestrator |
2025-05-05 00:56:33.062453 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] ****************************************
2025-05-05 00:56:33.062458 | orchestrator | Monday 05 May 2025 00:50:10 +0000 (0:00:00.337) 0:06:20.540 ************
2025-05-05 00:56:33.062463 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.062468 | orchestrator |
2025-05-05 00:56:33.062476 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] *************
2025-05-05 00:56:33.062481 | orchestrator | Monday 05 May 2025 00:50:11 +0000 (0:00:00.832) 0:06:21.373 ************
2025-05-05 00:56:33.062486 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.062495 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.062500 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.062505 | orchestrator |
2025-05-05 00:56:33.062510 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
2025-05-05 00:56:33.062555 | orchestrator | Monday 05 May 2025 00:50:11 +0000 (0:00:00.362) 0:06:21.735 ************
2025-05-05 00:56:33.062562 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.062567 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.062572 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.062577 | orchestrator |
2025-05-05 00:56:33.062582 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************
2025-05-05 00:56:33.062587 | orchestrator | Monday 05 May 2025 00:50:11 +0000 (0:00:00.429) 0:06:22.165 ************
2025-05-05 00:56:33.062592 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.062597 | orchestrator |
2025-05-05 00:56:33.062602 | orchestrator | TASK [ceph-mgr : generate systemd unit file] ***********************************
2025-05-05 00:56:33.062607 | orchestrator | Monday 05 May 2025 00:50:12 +0000 (0:00:00.681) 0:06:22.847 ************
2025-05-05 00:56:33.062612 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.062617 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.062622 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.062627 | orchestrator |
2025-05-05 00:56:33.062632 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************
2025-05-05 00:56:33.062636 | orchestrator | Monday 05 May 2025 00:50:13 +0000 (0:00:01.104) 0:06:23.951 ************
2025-05-05 00:56:33.062641 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.062646 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.062651 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.062656 | orchestrator |
2025-05-05 00:56:33.062661 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] ***************************************
2025-05-05 00:56:33.062666 | orchestrator | Monday 05 May 2025 00:50:14 +0000 (0:00:01.099) 0:06:25.051 ************
2025-05-05 00:56:33.062671 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.062676 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.062681 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.062686 | orchestrator |
2025-05-05 00:56:33.062690 | orchestrator | TASK [ceph-mgr : systemd start mgr] ********************************************
2025-05-05 00:56:33.062695 | orchestrator | Monday 05 May 2025 00:50:16 +0000 (0:00:01.876) 0:06:26.928 ************
2025-05-05 00:56:33.062700 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.062705 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.062710 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.062715 | orchestrator |
2025-05-05 00:56:33.062720 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] **************************************
2025-05-05 00:56:33.062725 | orchestrator | Monday 05 May 2025 00:50:18 +0000 (0:00:01.832) 0:06:28.760 ************
2025-05-05 00:56:33.062730 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.062734 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.062739 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-05-05 00:56:33.062744 | orchestrator |
2025-05-05 00:56:33.062749 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************
2025-05-05 00:56:33.062754 | orchestrator | Monday 05 May 2025 00:50:19 +0000 (0:00:00.791) 0:06:29.551 ************
2025-05-05 00:56:33.062759 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left).
2025-05-05 00:56:33.062764 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left).
2025-05-05 00:56:33.062769 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-05-05 00:56:33.062773 | orchestrator |
2025-05-05 00:56:33.062778 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
2025-05-05 00:56:33.062783 | orchestrator | Monday 05 May 2025 00:50:32 +0000 (0:00:13.565) 0:06:43.117 ************
2025-05-05 00:56:33.062792 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-05-05 00:56:33.062797 | orchestrator |
2025-05-05 00:56:33.062802 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-05-05 00:56:33.062807 | orchestrator | Monday 05 May 2025 00:50:34 +0000 (0:00:01.489) 0:06:44.606 ************
2025-05-05 00:56:33.062812 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.062817 | orchestrator |
2025-05-05 00:56:33.062821 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
2025-05-05 00:56:33.062826 | orchestrator | Monday 05 May 2025 00:50:34 +0000 (0:00:00.433) 0:06:45.040 ************
2025-05-05 00:56:33.062831 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.062836 | orchestrator |
2025-05-05 00:56:33.062841 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
2025-05-05 00:56:33.062846 | orchestrator | Monday 05 May 2025 00:50:35 +0000 (0:00:00.537) 0:06:45.578 ************
2025-05-05 00:56:33.062851 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-05-05 00:56:33.062856 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-05-05 00:56:33.062861 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-05-05 00:56:33.062866 | orchestrator |
2025-05-05 00:56:33.062870 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] **************************************
2025-05-05 00:56:33.062878 | orchestrator | Monday 05 May 2025 00:50:41 +0000 (0:00:06.180) 0:06:51.759 ************
2025-05-05 00:56:33.062883 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-05-05 00:56:33.062888 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-05-05 00:56:33.062893 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-05-05 00:56:33.062898 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-05-05 00:56:33.062903 | orchestrator |
2025-05-05 00:56:33.062908 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-05 00:56:33.062913 | orchestrator | Monday 05 May 2025 00:50:46 +0000 (0:00:05.141) 0:06:56.900 ************
2025-05-05 00:56:33.062918 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.062923 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.062927 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.062932 | orchestrator |
2025-05-05 00:56:33.062977 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-05-05 00:56:33.062984 | orchestrator | Monday 05 May 2025 00:50:47 +0000 (0:00:00.896) 0:06:57.797 ************
2025-05-05 00:56:33.062989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.062994 | orchestrator |
2025-05-05 00:56:33.062999 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-05-05 00:56:33.063004 | orchestrator | Monday 05 May 2025 00:50:48 +0000 (0:00:00.575) 0:06:58.372 ************
2025-05-05 00:56:33.063009 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.063014 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.063019 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.063024 | orchestrator |
2025-05-05 00:56:33.063029 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-05 00:56:33.063034 | orchestrator | Monday 05 May 2025 00:50:48 +0000 (0:00:00.346) 0:06:58.719 ************
2025-05-05 00:56:33.063038 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.063043 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.063048 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.063053 | orchestrator |
2025-05-05 00:56:33.063058 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-05-05 00:56:33.063063 | orchestrator | Monday 05 May 2025 00:50:50 +0000 (0:00:01.508) 0:07:00.227 ************
2025-05-05 00:56:33.063068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.063079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-05 00:56:33.063084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-05 00:56:33.063089 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.063094 | orchestrator |
2025-05-05 00:56:33.063099 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-05-05 00:56:33.063104 | orchestrator | Monday 05 May 2025 00:50:50 +0000 (0:00:00.742) 0:07:00.970 ************
2025-05-05 00:56:33.063109 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.063114 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.063119 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.063124 | orchestrator |
2025-05-05 00:56:33.063128 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-05 00:56:33.063133 | orchestrator | Monday 05 May 2025 00:50:51 +0000 (0:00:00.359) 0:07:01.329 ************
2025-05-05 00:56:33.063138 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.063143 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.063148 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.063156 | orchestrator |
2025-05-05 00:56:33.063161 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-05-05 00:56:33.063166 | orchestrator |
2025-05-05 00:56:33.063171 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-05 00:56:33.063176 | orchestrator | Monday 05 May 2025 00:50:53 +0000 (0:00:01.967) 0:07:03.296 ************
2025-05-05 00:56:33.063181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.063186 | orchestrator |
2025-05-05 00:56:33.063191 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-05 00:56:33.063196 | orchestrator | Monday 05 May 2025 00:50:53 +0000 (0:00:00.727) 0:07:04.024 ************
2025-05-05 00:56:33.063200 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063205 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063210 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063215 | orchestrator |
2025-05-05 00:56:33.063220 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-05 00:56:33.063225 | orchestrator | Monday 05 May 2025 00:50:54 +0000 (0:00:00.293) 0:07:04.318 ************
2025-05-05 00:56:33.063230 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063235 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063239 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063244 | orchestrator |
2025-05-05 00:56:33.063249 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-05 00:56:33.063254 | orchestrator | Monday 05 May 2025 00:50:55 +0000 (0:00:00.903) 0:07:05.222 ************
2025-05-05 00:56:33.063259 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063264 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063269 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063274 | orchestrator |
2025-05-05 00:56:33.063279 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-05 00:56:33.063283 | orchestrator | Monday 05 May 2025 00:50:55 +0000 (0:00:00.704) 0:07:05.926 ************
2025-05-05 00:56:33.063288 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063293 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063298 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063303 | orchestrator |
2025-05-05 00:56:33.063319 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-05 00:56:33.063324 | orchestrator | Monday 05 May 2025 00:50:56 +0000 (0:00:00.720) 0:07:06.647 ************
2025-05-05 00:56:33.063329 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063334 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063339 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063344 | orchestrator |
2025-05-05 00:56:33.063348 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-05 00:56:33.063353 | orchestrator | Monday 05 May 2025 00:50:56 +0000 (0:00:00.301) 0:07:06.948 ************
2025-05-05 00:56:33.063362 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063367 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063372 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063376 | orchestrator |
2025-05-05 00:56:33.063384 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-05 00:56:33.063389 | orchestrator | Monday 05 May 2025 00:50:57 +0000 (0:00:00.510) 0:07:07.458 ************
2025-05-05 00:56:33.063394 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063398 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063403 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063408 | orchestrator |
2025-05-05 00:56:33.063413 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-05 00:56:33.063430 | orchestrator | Monday 05 May 2025 00:50:57 +0000 (0:00:00.347) 0:07:07.806 ************
2025-05-05 00:56:33.063436 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063441 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063446 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063451 | orchestrator |
2025-05-05 00:56:33.063456 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-05 00:56:33.063461 | orchestrator | Monday 05 May 2025 00:50:57 +0000 (0:00:00.381) 0:07:08.188 ************
2025-05-05 00:56:33.063466 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063471 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063476 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063481 | orchestrator |
2025-05-05 00:56:33.063486 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-05 00:56:33.063491 | orchestrator | Monday 05 May 2025 00:50:58 +0000 (0:00:00.322) 0:07:08.510 ************
2025-05-05 00:56:33.063496 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063501 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063506 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063511 | orchestrator |
2025-05-05 00:56:33.063516 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-05 00:56:33.063521 | orchestrator | Monday 05 May 2025 00:50:58 +0000 (0:00:00.600) 0:07:09.110 ************
2025-05-05 00:56:33.063526 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063531 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063536 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063541 | orchestrator |
2025-05-05 00:56:33.063546 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-05 00:56:33.063551 | orchestrator | Monday 05 May 2025 00:50:59 +0000 (0:00:00.723) 0:07:09.833 ************
2025-05-05 00:56:33.063556 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063561 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063565 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063570 | orchestrator |
2025-05-05 00:56:33.063575 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-05 00:56:33.063580 | orchestrator | Monday 05 May 2025 00:50:59 +0000 (0:00:00.314) 0:07:10.156 ************
2025-05-05 00:56:33.063585 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063590 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063595 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063599 | orchestrator |
2025-05-05 00:56:33.063604 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-05 00:56:33.063609 | orchestrator | Monday 05 May 2025 00:51:00 +0000 (0:00:00.314) 0:07:10.471 ************
2025-05-05 00:56:33.063614 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063619 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063624 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063628 | orchestrator |
2025-05-05 00:56:33.063633 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-05 00:56:33.063638 | orchestrator | Monday 05 May 2025 00:51:00 +0000 (0:00:00.587) 0:07:11.058 ************
2025-05-05 00:56:33.063643 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063651 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063656 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063661 | orchestrator |
2025-05-05 00:56:33.063666 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-05 00:56:33.063671 | orchestrator | Monday 05 May 2025 00:51:01 +0000 (0:00:00.343) 0:07:11.402 ************
2025-05-05 00:56:33.063676 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063680 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063685 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063690 | orchestrator |
2025-05-05 00:56:33.063695 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-05 00:56:33.063700 | orchestrator | Monday 05 May 2025 00:51:01 +0000 (0:00:00.325) 0:07:11.728 ************
2025-05-05 00:56:33.063705 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063710 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063715 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063722 | orchestrator |
2025-05-05 00:56:33.063727 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-05 00:56:33.063732 | orchestrator | Monday 05 May 2025 00:51:01 +0000 (0:00:00.300) 0:07:12.028 ************
2025-05-05 00:56:33.063737 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063742 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063747 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063752 | orchestrator |
2025-05-05 00:56:33.063757 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-05 00:56:33.063762 | orchestrator | Monday 05 May 2025 00:51:02 +0000 (0:00:00.577) 0:07:12.606 ************
2025-05-05 00:56:33.063767 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063772 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063777 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063782 | orchestrator |
2025-05-05 00:56:33.063787 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-05 00:56:33.063792 | orchestrator | Monday 05 May 2025 00:51:02 +0000 (0:00:00.325) 0:07:12.932 ************
2025-05-05 00:56:33.063796 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.063801 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.063806 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.063811 | orchestrator |
2025-05-05 00:56:33.063816 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-05 00:56:33.063821 | orchestrator | Monday 05 May 2025 00:51:03 +0000 (0:00:00.365) 0:07:13.297 ************
2025-05-05 00:56:33.063826 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063831 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063836 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063841 | orchestrator |
2025-05-05 00:56:33.063846 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-05 00:56:33.063851 | orchestrator | Monday 05 May 2025 00:51:03 +0000 (0:00:00.308) 0:07:13.605 ************
2025-05-05 00:56:33.063855 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063860 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063865 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063870 | orchestrator |
2025-05-05 00:56:33.063878 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-05 00:56:33.063894 | orchestrator | Monday 05 May 2025 00:51:04 +0000 (0:00:00.583) 0:07:14.189 ************
2025-05-05 00:56:33.063900 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063905 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063910 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063915 | orchestrator |
2025-05-05 00:56:33.063920 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-05 00:56:33.063925 | orchestrator | Monday 05 May 2025 00:51:04 +0000 (0:00:00.356) 0:07:14.545 ************
2025-05-05 00:56:33.063930 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063935 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063943 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063948 | orchestrator |
2025-05-05 00:56:33.063953 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-05 00:56:33.063958 | orchestrator | Monday 05 May 2025 00:51:04 +0000 (0:00:00.366) 0:07:14.893 ************
2025-05-05 00:56:33.063963 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063968 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.063973 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.063978 | orchestrator |
2025-05-05 00:56:33.063983 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-05 00:56:33.063988 | orchestrator | Monday 05 May 2025 00:51:05 +0000 (0:00:00.366) 0:07:15.260 ************
2025-05-05 00:56:33.063993 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.063998 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064003 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064008 | orchestrator |
2025-05-05 00:56:33.064013 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-05 00:56:33.064018 | orchestrator | Monday 05 May 2025 00:51:05 +0000 (0:00:00.545) 0:07:15.805 ************
2025-05-05 00:56:33.064023 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.064028 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064033 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064038 | orchestrator |
2025-05-05 00:56:33.064043 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-05 00:56:33.064048 | orchestrator | Monday 05 May 2025 00:51:05 +0000 (0:00:00.333) 0:07:16.139 ************
2025-05-05 00:56:33.064053 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.064058 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064063 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064067 | orchestrator |
2025-05-05 00:56:33.064072 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-05 00:56:33.064077 | orchestrator | Monday 05 May 2025 00:51:06 +0000 (0:00:00.328) 0:07:16.467 ************
2025-05-05 00:56:33.064082 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.064087 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064092 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064097 | orchestrator |
2025-05-05 00:56:33.064102 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-05 00:56:33.064107 | orchestrator | Monday 05 May 2025 00:51:06 +0000 (0:00:00.356) 0:07:16.824 ************
2025-05-05 00:56:33.064112 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.064117 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064122 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064127 | orchestrator |
2025-05-05 00:56:33.064132 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-05 00:56:33.064136 | orchestrator | Monday 05 May 2025 00:51:07 +0000 (0:00:00.579) 0:07:17.404 ************
2025-05-05 00:56:33.064141 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.064146 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064151 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064156 | orchestrator |
2025-05-05 00:56:33.064161 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-05 00:56:33.064166 | orchestrator | Monday 05 May 2025 00:51:07 +0000 (0:00:00.332) 0:07:17.737 ************
2025-05-05 00:56:33.064171 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.064176 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.064180 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.064185 | orchestrator |
2025-05-05 00:56:33.064190 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-05 00:56:33.064195 | orchestrator | Monday 05 May 2025 00:51:07 +0000 (0:00:00.330) 0:07:18.067 ************
2025-05-05 00:56:33.064200 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.064208 | orchestrator | skipping:
[testbed-node-3] => (item=)  2025-05-05 00:56:33.064213 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064218 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-05 00:56:33.064223 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-05 00:56:33.064228 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064233 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-05 00:56:33.064237 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-05 00:56:33.064242 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064247 | orchestrator | 2025-05-05 00:56:33.064252 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-05 00:56:33.064257 | orchestrator | Monday 05 May 2025 00:51:08 +0000 (0:00:00.460) 0:07:18.527 ************ 2025-05-05 00:56:33.064262 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-05 00:56:33.064269 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-05 00:56:33.064274 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-05 00:56:33.064279 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-05 00:56:33.064283 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064288 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064293 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-05 00:56:33.064298 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-05 00:56:33.064303 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064321 | orchestrator | 2025-05-05 00:56:33.064326 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-05 00:56:33.064344 | orchestrator | Monday 05 May 2025 00:51:08 +0000 (0:00:00.617) 0:07:19.145 ************ 2025-05-05 00:56:33.064350 | 
orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064355 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064360 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064365 | orchestrator | 2025-05-05 00:56:33.064370 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-05 00:56:33.064375 | orchestrator | Monday 05 May 2025 00:51:09 +0000 (0:00:00.347) 0:07:19.492 ************ 2025-05-05 00:56:33.064380 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064384 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064389 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064394 | orchestrator | 2025-05-05 00:56:33.064399 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-05 00:56:33.064404 | orchestrator | Monday 05 May 2025 00:51:09 +0000 (0:00:00.350) 0:07:19.843 ************ 2025-05-05 00:56:33.064409 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064414 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064418 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064423 | orchestrator | 2025-05-05 00:56:33.064428 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-05 00:56:33.064433 | orchestrator | Monday 05 May 2025 00:51:09 +0000 (0:00:00.328) 0:07:20.171 ************ 2025-05-05 00:56:33.064438 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064443 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064448 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064453 | orchestrator | 2025-05-05 00:56:33.064458 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-05 00:56:33.064462 | orchestrator | Monday 05 May 2025 00:51:10 +0000 (0:00:00.549) 
0:07:20.721 ************ 2025-05-05 00:56:33.064467 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064472 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064477 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064482 | orchestrator | 2025-05-05 00:56:33.064487 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-05 00:56:33.064496 | orchestrator | Monday 05 May 2025 00:51:10 +0000 (0:00:00.334) 0:07:21.055 ************ 2025-05-05 00:56:33.064501 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064506 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064511 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064515 | orchestrator | 2025-05-05 00:56:33.064520 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-05 00:56:33.064528 | orchestrator | Monday 05 May 2025 00:51:11 +0000 (0:00:00.356) 0:07:21.411 ************ 2025-05-05 00:56:33.064533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.064538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.064543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.064547 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064552 | orchestrator | 2025-05-05 00:56:33.064557 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-05 00:56:33.064562 | orchestrator | Monday 05 May 2025 00:51:11 +0000 (0:00:00.435) 0:07:21.847 ************ 2025-05-05 00:56:33.064567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.064572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.064577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.064582 | 
orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064586 | orchestrator | 2025-05-05 00:56:33.064591 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-05 00:56:33.064596 | orchestrator | Monday 05 May 2025 00:51:12 +0000 (0:00:00.436) 0:07:22.283 ************ 2025-05-05 00:56:33.064601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.064606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.064611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.064616 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064620 | orchestrator | 2025-05-05 00:56:33.064625 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-05 00:56:33.064630 | orchestrator | Monday 05 May 2025 00:51:12 +0000 (0:00:00.719) 0:07:23.003 ************ 2025-05-05 00:56:33.064635 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064640 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064645 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064650 | orchestrator | 2025-05-05 00:56:33.064654 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-05 00:56:33.064659 | orchestrator | Monday 05 May 2025 00:51:13 +0000 (0:00:00.563) 0:07:23.566 ************ 2025-05-05 00:56:33.064664 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.064669 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064674 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.064679 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064684 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.064689 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064694 | orchestrator | 2025-05-05 
00:56:33.064699 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-05 00:56:33.064704 | orchestrator | Monday 05 May 2025 00:51:13 +0000 (0:00:00.456) 0:07:24.023 ************ 2025-05-05 00:56:33.064709 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064714 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064719 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064723 | orchestrator | 2025-05-05 00:56:33.064728 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-05 00:56:33.064733 | orchestrator | Monday 05 May 2025 00:51:14 +0000 (0:00:00.332) 0:07:24.356 ************ 2025-05-05 00:56:33.064738 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064743 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064751 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064756 | orchestrator | 2025-05-05 00:56:33.064772 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-05 00:56:33.064778 | orchestrator | Monday 05 May 2025 00:51:14 +0000 (0:00:00.324) 0:07:24.681 ************ 2025-05-05 00:56:33.064783 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.064788 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064793 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.064798 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064803 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.064808 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064813 | orchestrator | 2025-05-05 00:56:33.064817 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-05 00:56:33.064822 | orchestrator | Monday 05 May 2025 00:51:15 +0000 (0:00:00.768) 0:07:25.450 ************ 2025-05-05 
00:56:33.064827 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.064832 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064837 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.064842 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064847 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.064852 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064857 | orchestrator | 2025-05-05 00:56:33.064862 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-05 00:56:33.064867 | orchestrator | Monday 05 May 2025 00:51:15 +0000 (0:00:00.339) 0:07:25.789 ************ 2025-05-05 00:56:33.064872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.064877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.064882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.064887 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-05 00:56:33.064892 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-05 00:56:33.064926 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-05 00:56:33.064931 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064936 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064941 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-05 00:56:33.064946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-05 00:56:33.064951 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-5)  2025-05-05 00:56:33.064956 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064961 | orchestrator | 2025-05-05 00:56:33.064965 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-05 00:56:33.064970 | orchestrator | Monday 05 May 2025 00:51:16 +0000 (0:00:00.608) 0:07:26.398 ************ 2025-05-05 00:56:33.064975 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.064980 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.064985 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.064990 | orchestrator | 2025-05-05 00:56:33.064995 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-05 00:56:33.065000 | orchestrator | Monday 05 May 2025 00:51:17 +0000 (0:00:00.815) 0:07:27.214 ************ 2025-05-05 00:56:33.065005 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-05 00:56:33.065010 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065014 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-05 00:56:33.065019 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065024 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-05 00:56:33.065029 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065037 | orchestrator | 2025-05-05 00:56:33.065042 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-05 00:56:33.065047 | orchestrator | Monday 05 May 2025 00:51:17 +0000 (0:00:00.575) 0:07:27.789 ************ 2025-05-05 00:56:33.065052 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065057 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065062 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065067 | orchestrator | 2025-05-05 00:56:33.065072 | orchestrator | TASK [ceph-handler : set_fact 
multisite_called_from_handler_role] ************** 2025-05-05 00:56:33.065077 | orchestrator | Monday 05 May 2025 00:51:18 +0000 (0:00:00.869) 0:07:28.658 ************ 2025-05-05 00:56:33.065081 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065110 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065116 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065121 | orchestrator | 2025-05-05 00:56:33.065126 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-05 00:56:33.065131 | orchestrator | Monday 05 May 2025 00:51:19 +0000 (0:00:00.541) 0:07:29.199 ************ 2025-05-05 00:56:33.065136 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.065141 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.065146 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.065151 | orchestrator | 2025-05-05 00:56:33.065155 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-05 00:56:33.065161 | orchestrator | Monday 05 May 2025 00:51:19 +0000 (0:00:00.721) 0:07:29.921 ************ 2025-05-05 00:56:33.065168 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-05 00:56:33.065173 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-05 00:56:33.065178 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-05 00:56:33.065183 | orchestrator | 2025-05-05 00:56:33.065188 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-05 00:56:33.065193 | orchestrator | Monday 05 May 2025 00:51:20 +0000 (0:00:00.742) 0:07:30.664 ************ 2025-05-05 00:56:33.065212 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.065218 | orchestrator | 
2025-05-05 00:56:33.065223 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-05 00:56:33.065228 | orchestrator | Monday 05 May 2025 00:51:21 +0000 (0:00:00.537) 0:07:31.202 ************ 2025-05-05 00:56:33.065233 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065238 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065243 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065248 | orchestrator | 2025-05-05 00:56:33.065253 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-05 00:56:33.065258 | orchestrator | Monday 05 May 2025 00:51:21 +0000 (0:00:00.546) 0:07:31.748 ************ 2025-05-05 00:56:33.065263 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065268 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065273 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065278 | orchestrator | 2025-05-05 00:56:33.065283 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-05 00:56:33.065287 | orchestrator | Monday 05 May 2025 00:51:21 +0000 (0:00:00.317) 0:07:32.066 ************ 2025-05-05 00:56:33.065292 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065297 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065302 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065338 | orchestrator | 2025-05-05 00:56:33.065343 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-05 00:56:33.065348 | orchestrator | Monday 05 May 2025 00:51:22 +0000 (0:00:00.318) 0:07:32.384 ************ 2025-05-05 00:56:33.065353 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065358 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065367 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065372 | orchestrator | 
2025-05-05 00:56:33.065377 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-05 00:56:33.065382 | orchestrator | Monday 05 May 2025 00:51:22 +0000 (0:00:00.323) 0:07:32.708 ************ 2025-05-05 00:56:33.065387 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.065392 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.065397 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.065402 | orchestrator | 2025-05-05 00:56:33.065407 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-05 00:56:33.065412 | orchestrator | Monday 05 May 2025 00:51:23 +0000 (0:00:00.834) 0:07:33.543 ************ 2025-05-05 00:56:33.065417 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.065422 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.065427 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.065432 | orchestrator | 2025-05-05 00:56:33.065437 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-05 00:56:33.065443 | orchestrator | Monday 05 May 2025 00:51:23 +0000 (0:00:00.349) 0:07:33.892 ************ 2025-05-05 00:56:33.065448 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-05 00:56:33.065456 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-05 00:56:33.065461 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-05 00:56:33.065466 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-05 00:56:33.065471 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-05 00:56:33.065476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-05 
00:56:33.065481 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-05 00:56:33.065486 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-05 00:56:33.065491 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-05 00:56:33.065496 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-05 00:56:33.065501 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-05 00:56:33.065506 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-05 00:56:33.065511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-05 00:56:33.065516 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-05 00:56:33.065521 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-05 00:56:33.065526 | orchestrator | 2025-05-05 00:56:33.065531 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-05 00:56:33.065536 | orchestrator | Monday 05 May 2025 00:51:26 +0000 (0:00:03.066) 0:07:36.959 ************ 2025-05-05 00:56:33.065541 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065546 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065551 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065556 | orchestrator | 2025-05-05 00:56:33.065561 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-05 00:56:33.065566 | orchestrator | Monday 05 May 2025 00:51:27 +0000 (0:00:00.565) 0:07:37.524 ************ 2025-05-05 00:56:33.065571 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.065576 | orchestrator | 2025-05-05 00:56:33.065583 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-05 00:56:33.065588 | orchestrator | Monday 05 May 2025 00:51:27 +0000 (0:00:00.576) 0:07:38.101 ************ 2025-05-05 00:56:33.065596 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-05 00:56:33.065616 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-05 00:56:33.065622 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-05 00:56:33.065627 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-05 00:56:33.065632 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-05 00:56:33.065637 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-05 00:56:33.065642 | orchestrator | 2025-05-05 00:56:33.065647 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-05 00:56:33.065652 | orchestrator | Monday 05 May 2025 00:51:28 +0000 (0:00:00.976) 0:07:39.078 ************ 2025-05-05 00:56:33.065657 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-05 00:56:33.065662 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-05 00:56:33.065668 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-05 00:56:33.065672 | orchestrator | 2025-05-05 00:56:33.065677 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-05 00:56:33.065682 | orchestrator | Monday 05 May 2025 00:51:30 +0000 (0:00:01.977) 0:07:41.055 ************ 2025-05-05 00:56:33.065687 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-05 00:56:33.065692 | orchestrator | skipping: 
[testbed-node-3] => (item=None)  2025-05-05 00:56:33.065697 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.065705 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-05 00:56:33.065710 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-05 00:56:33.065747 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.065752 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-05 00:56:33.065757 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-05 00:56:33.065762 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.065767 | orchestrator | 2025-05-05 00:56:33.065772 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-05 00:56:33.065777 | orchestrator | Monday 05 May 2025 00:51:32 +0000 (0:00:01.153) 0:07:42.208 ************ 2025-05-05 00:56:33.065782 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-05 00:56:33.065786 | orchestrator | 2025-05-05 00:56:33.065791 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-05 00:56:33.065796 | orchestrator | Monday 05 May 2025 00:51:34 +0000 (0:00:02.427) 0:07:44.636 ************ 2025-05-05 00:56:33.065801 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.065806 | orchestrator | 2025-05-05 00:56:33.065811 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-05 00:56:33.065816 | orchestrator | Monday 05 May 2025 00:51:35 +0000 (0:00:00.713) 0:07:45.350 ************ 2025-05-05 00:56:33.065821 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065826 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065831 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065836 | orchestrator | 
2025-05-05 00:56:33.065841 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-05 00:56:33.065846 | orchestrator | Monday 05 May 2025 00:51:35 +0000 (0:00:00.323) 0:07:45.673 ************ 2025-05-05 00:56:33.065851 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065859 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065864 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065869 | orchestrator | 2025-05-05 00:56:33.065874 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-05 00:56:33.065879 | orchestrator | Monday 05 May 2025 00:51:35 +0000 (0:00:00.312) 0:07:45.986 ************ 2025-05-05 00:56:33.065888 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.065893 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.065898 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.065903 | orchestrator | 2025-05-05 00:56:33.065908 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-05 00:56:33.065913 | orchestrator | Monday 05 May 2025 00:51:36 +0000 (0:00:00.345) 0:07:46.332 ************ 2025-05-05 00:56:33.065918 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.065923 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.065928 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.065933 | orchestrator | 2025-05-05 00:56:33.065938 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-05 00:56:33.065942 | orchestrator | Monday 05 May 2025 00:51:36 +0000 (0:00:00.545) 0:07:46.877 ************ 2025-05-05 00:56:33.065947 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.065953 | orchestrator | 
2025-05-05 00:56:33.065958 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
2025-05-05 00:56:33.065963 | orchestrator | Monday 05 May 2025  00:51:37 +0000 (0:00:00.572)       0:07:47.450 ************
2025-05-05 00:56:33.065968 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-19ded391-41bb-58c4-acef-51f998367f5e', 'data_vg': 'ceph-19ded391-41bb-58c4-acef-51f998367f5e'})
2025-05-05 00:56:33.065974 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f', 'data_vg': 'ceph-09f6cbbb-bab3-56dc-a9fe-f7e4ce5d119f'})
2025-05-05 00:56:33.065979 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b45d62aa-c8ca-51ec-bff2-6c96656db621', 'data_vg': 'ceph-b45d62aa-c8ca-51ec-bff2-6c96656db621'})
2025-05-05 00:56:33.065984 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1dbbf782-cf90-597f-b1d9-d891fd7b35f3', 'data_vg': 'ceph-1dbbf782-cf90-597f-b1d9-d891fd7b35f3'})
2025-05-05 00:56:33.066004 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ac6a629e-412f-52b8-abc2-7f30e47159be', 'data_vg': 'ceph-ac6a629e-412f-52b8-abc2-7f30e47159be'})
2025-05-05 00:56:33.066010 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e', 'data_vg': 'ceph-5b3e4e2d-95bb-5d7e-b29f-9e0b9408011e'})
2025-05-05 00:56:33.066039 | orchestrator |
2025-05-05 00:56:33.066044 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
2025-05-05 00:56:33.066050 | orchestrator | Monday 05 May 2025  00:52:15 +0000 (0:00:38.452)       0:08:25.902 ************
2025-05-05 00:56:33.066055 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066060 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066065 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066070 | orchestrator |
2025-05-05 00:56:33.066075 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] *********************************
2025-05-05 00:56:33.066080 | orchestrator | Monday 05 May 2025  00:52:16 +0000 (0:00:00.452)       0:08:26.355 ************
2025-05-05 00:56:33.066085 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.066090 | orchestrator |
2025-05-05 00:56:33.066095 | orchestrator | TASK [ceph-osd : get osd ids] **************************************************
2025-05-05 00:56:33.066100 | orchestrator | Monday 05 May 2025  00:52:16 +0000 (0:00:00.534)       0:08:26.889 ************
2025-05-05 00:56:33.066105 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.066109 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.066114 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.066119 | orchestrator |
2025-05-05 00:56:33.066125 | orchestrator | TASK [ceph-osd : collect osd ids] **********************************************
2025-05-05 00:56:33.066130 | orchestrator | Monday 05 May 2025  00:52:17 +0000 (0:00:00.654)       0:08:27.544 ************
2025-05-05 00:56:33.066135 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.066140 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.066148 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.066156 | orchestrator |
2025-05-05 00:56:33.066161 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************
2025-05-05 00:56:33.066167 | orchestrator | Monday 05 May 2025  00:52:19 +0000 (0:00:01.971)       0:08:29.516 ************
2025-05-05 00:56:33.066172 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.066177 | orchestrator |
2025-05-05 00:56:33.066182 | orchestrator | TASK [ceph-osd : generate systemd unit file] ***********************************
2025-05-05 00:56:33.066186 | orchestrator | Monday 05 May 2025  00:52:19 +0000 (0:00:00.620)       0:08:30.137 ************
2025-05-05 00:56:33.066191 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.066196 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.066201 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.066206 | orchestrator |
2025-05-05 00:56:33.066211 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************
2025-05-05 00:56:33.066218 | orchestrator | Monday 05 May 2025  00:52:21 +0000 (0:00:01.380)       0:08:31.517 ************
2025-05-05 00:56:33.066223 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.066228 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.066233 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.066238 | orchestrator |
2025-05-05 00:56:33.066243 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] ***************************************
2025-05-05 00:56:33.066248 | orchestrator | Monday 05 May 2025  00:52:22 +0000 (0:00:01.123)       0:08:32.640 ************
2025-05-05 00:56:33.066253 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.066258 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.066263 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.066268 | orchestrator |
2025-05-05 00:56:33.066273 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] *************
2025-05-05 00:56:33.066277 | orchestrator | Monday 05 May 2025  00:52:24 +0000 (0:00:01.680)       0:08:34.321 ************
2025-05-05 00:56:33.066282 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066287 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066292 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066297 | orchestrator |
2025-05-05 00:56:33.066302 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] ***********************
2025-05-05 00:56:33.066320 | orchestrator | Monday 05 May 2025  00:52:24 +0000 (0:00:00.345)       0:08:34.666 ************
2025-05-05 00:56:33.066326 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066331 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066336 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066341 | orchestrator |
2025-05-05 00:56:33.066346 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] ***
2025-05-05 00:56:33.066351 | orchestrator | Monday 05 May 2025  00:52:25 +0000 (0:00:00.568)       0:08:35.235 ************
2025-05-05 00:56:33.066355 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.066360 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-05-05 00:56:33.066365 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-05-05 00:56:33.066370 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-05-05 00:56:33.066375 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-05-05 00:56:33.066380 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-05-05 00:56:33.066385 | orchestrator |
2025-05-05 00:56:33.066390 | orchestrator | TASK [ceph-osd : systemd start osd] ********************************************
2025-05-05 00:56:33.066394 | orchestrator | Monday 05 May 2025  00:52:26 +0000 (0:00:00.985)       0:08:36.221 ************
2025-05-05 00:56:33.066399 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.066404 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-05-05 00:56:33.066409 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-05-05 00:56:33.066414 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-05-05 00:56:33.066419 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-05-05 00:56:33.066423 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-05-05 00:56:33.066431 | orchestrator |
2025-05-05 00:56:33.066436 | orchestrator | TASK [ceph-osd : unset noup flag] **********************************************
2025-05-05 00:56:33.066455 | orchestrator | Monday 05 May 2025  00:52:29 +0000 (0:00:03.331)       0:08:39.552 ************
2025-05-05 00:56:33.066462 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066467 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-05 00:56:33.066477 | orchestrator |
2025-05-05 00:56:33.066482 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************
2025-05-05 00:56:33.066487 | orchestrator | Monday 05 May 2025  00:52:32 +0000 (0:00:03.070)       0:08:42.623 ************
2025-05-05 00:56:33.066491 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066496 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066501 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left).
2025-05-05 00:56:33.066506 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-05 00:56:33.066511 | orchestrator |
2025-05-05 00:56:33.066516 | orchestrator | TASK [ceph-osd : include crush_rules.yml] **************************************
2025-05-05 00:56:33.066521 | orchestrator | Monday 05 May 2025  00:52:45 +0000 (0:00:12.681)       0:08:55.305 ************
2025-05-05 00:56:33.066526 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066530 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066535 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066540 | orchestrator |
2025-05-05 00:56:33.066545 | orchestrator | TASK [ceph-osd : include openstack_config.yml] *********************************
2025-05-05 00:56:33.066550 | orchestrator | Monday 05 May 2025  00:52:45 +0000 (0:00:00.468)       0:08:55.774 ************
2025-05-05 00:56:33.066555 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066562 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066567 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066572 | orchestrator |
2025-05-05 00:56:33.066577 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-05 00:56:33.066582 | orchestrator | Monday 05 May 2025  00:52:46 +0000 (0:00:01.152)       0:08:56.926 ************
2025-05-05 00:56:33.066587 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.066592 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.066597 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.066602 | orchestrator |
2025-05-05 00:56:33.066607 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-05-05 00:56:33.066612 | orchestrator | Monday 05 May 2025  00:52:47 +0000 (0:00:00.889)       0:08:57.816 ************
2025-05-05 00:56:33.066616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.066621 | orchestrator |
2025-05-05 00:56:33.066626 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
2025-05-05 00:56:33.066631 | orchestrator | Monday 05 May 2025  00:52:48 +0000 (0:00:00.558)       0:08:58.375 ************
2025-05-05 00:56:33.066636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.066641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.066646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.066651 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066656 | orchestrator |
2025-05-05 00:56:33.066661 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
2025-05-05 00:56:33.066666 | orchestrator | Monday 05 May 2025  00:52:48 +0000 (0:00:00.427)       0:08:58.803 ************
2025-05-05 00:56:33.066671 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066676 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066681 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066685 | orchestrator |
2025-05-05 00:56:33.066690 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
2025-05-05 00:56:33.066699 | orchestrator | Monday 05 May 2025  00:52:48 +0000 (0:00:00.302)       0:08:59.105 ************
2025-05-05 00:56:33.066704 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066708 | orchestrator |
2025-05-05 00:56:33.066713 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-05-05 00:56:33.066718 | orchestrator | Monday 05 May 2025  00:52:49 +0000 (0:00:00.268)       0:08:59.374 ************
2025-05-05 00:56:33.066723 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066728 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066733 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066738 | orchestrator |
2025-05-05 00:56:33.066743 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-05-05 00:56:33.066774 | orchestrator | Monday 05 May 2025  00:52:49 +0000 (0:00:00.697)       0:09:00.071 ************
2025-05-05 00:56:33.066779 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066784 | orchestrator |
2025-05-05 00:56:33.066790 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-05-05 00:56:33.066795 | orchestrator | Monday 05 May 2025  00:52:50 +0000 (0:00:00.290)       0:09:00.362 ************
2025-05-05 00:56:33.066800 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066804 | orchestrator |
2025-05-05 00:56:33.066809 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-05-05 00:56:33.066814 | orchestrator | Monday 05 May 2025  00:52:50 +0000 (0:00:00.277)       0:09:00.640 ************
2025-05-05 00:56:33.066819 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066824 | orchestrator |
2025-05-05 00:56:33.066829 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-05-05 00:56:33.066834 | orchestrator | Monday 05 May 2025  00:52:50 +0000 (0:00:00.146)       0:09:00.786 ************
2025-05-05 00:56:33.066839 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066844 | orchestrator |
2025-05-05 00:56:33.066849 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-05-05 00:56:33.066854 | orchestrator | Monday 05 May 2025  00:52:50 +0000 (0:00:00.325)       0:09:01.111 ************
2025-05-05 00:56:33.066859 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066864 | orchestrator |
2025-05-05 00:56:33.066869 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-05-05 00:56:33.066874 | orchestrator | Monday 05 May 2025  00:52:51 +0000 (0:00:00.257)       0:09:01.369 ************
2025-05-05 00:56:33.066879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.066898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.066904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.066909 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066914 | orchestrator |
2025-05-05 00:56:33.066919 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-05-05 00:56:33.066924 | orchestrator | Monday 05 May 2025  00:52:51 +0000 (0:00:00.408)       0:09:01.778 ************
2025-05-05 00:56:33.066929 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066934 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.066939 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.066947 | orchestrator |
2025-05-05 00:56:33.066952 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-05-05 00:56:33.066957 | orchestrator | Monday 05 May 2025  00:52:52 +0000 (0:00:00.624)       0:09:02.402 ************
2025-05-05 00:56:33.066962 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066967 | orchestrator |
2025-05-05 00:56:33.066972 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-05-05 00:56:33.066977 | orchestrator | Monday 05 May 2025  00:52:52 +0000 (0:00:00.236)       0:09:02.639 ************
2025-05-05 00:56:33.066982 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.066987 | orchestrator |
2025-05-05 00:56:33.066992 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-05 00:56:33.066997 | orchestrator | Monday 05 May 2025  00:52:52 +0000 (0:00:00.248)       0:09:02.887 ************
2025-05-05 00:56:33.067006 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.067011 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.067015 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.067020 | orchestrator |
2025-05-05 00:56:33.067025 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-05-05 00:56:33.067030 | orchestrator |
2025-05-05 00:56:33.067035 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-05 00:56:33.067040 | orchestrator | Monday 05 May 2025  00:52:55 +0000 (0:00:02.743)       0:09:05.631 ************
2025-05-05 00:56:33.067045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.067051 | orchestrator |
2025-05-05 00:56:33.067056 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-05 00:56:33.067061 | orchestrator | Monday 05 May 2025  00:52:56 +0000 (0:00:01.231)       0:09:06.862 ************
2025-05-05 00:56:33.067066 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067070 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.067075 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067080 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067085 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.067090 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.067095 | orchestrator |
2025-05-05 00:56:33.067100 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-05 00:56:33.067105 | orchestrator | Monday 05 May 2025  00:52:57 +0000 (0:00:00.959)       0:09:07.821 ************
2025-05-05 00:56:33.067110 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067115 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067120 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067125 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067130 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067135 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067140 | orchestrator |
2025-05-05 00:56:33.067144 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-05 00:56:33.067150 | orchestrator | Monday 05 May 2025  00:52:58 +0000 (0:00:00.996)       0:09:08.817 ************
2025-05-05 00:56:33.067155 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067160 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067165 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067170 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067175 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067180 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067185 | orchestrator |
2025-05-05 00:56:33.067190 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-05 00:56:33.067195 | orchestrator | Monday 05 May 2025  00:52:59 +0000 (0:00:01.203)       0:09:10.021 ************
2025-05-05 00:56:33.067200 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067205 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067209 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067214 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067219 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067224 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067229 | orchestrator |
2025-05-05 00:56:33.067234 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-05 00:56:33.067241 | orchestrator | Monday 05 May 2025  00:53:00 +0000 (0:00:01.058)       0:09:11.080 ************
2025-05-05 00:56:33.067246 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067251 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.067257 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067262 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.067266 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.067271 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067276 | orchestrator |
2025-05-05 00:56:33.067281 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-05 00:56:33.067289 | orchestrator | Monday 05 May 2025  00:53:01 +0000 (0:00:01.040)       0:09:12.120 ************
2025-05-05 00:56:33.067294 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067299 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067313 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067319 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067324 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067329 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067334 | orchestrator |
2025-05-05 00:56:33.067338 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-05 00:56:33.067343 | orchestrator | Monday 05 May 2025  00:53:02 +0000 (0:00:00.711)       0:09:12.832 ************
2025-05-05 00:56:33.067348 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067353 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067359 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067364 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067382 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067388 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067394 | orchestrator |
2025-05-05 00:56:33.067399 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-05 00:56:33.067404 | orchestrator | Monday 05 May 2025  00:53:03 +0000 (0:00:00.975)       0:09:13.808 ************
2025-05-05 00:56:33.067408 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067413 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067418 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067423 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067431 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067436 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067441 | orchestrator |
2025-05-05 00:56:33.067446 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-05 00:56:33.067451 | orchestrator | Monday 05 May 2025  00:53:04 +0000 (0:00:00.630)       0:09:14.438 ************
2025-05-05 00:56:33.067456 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067461 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067466 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067471 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067477 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067482 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067487 | orchestrator |
2025-05-05 00:56:33.067492 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-05 00:56:33.067497 | orchestrator | Monday 05 May 2025  00:53:05 +0000 (0:00:00.909)       0:09:15.348 ************
2025-05-05 00:56:33.067502 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067507 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067512 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067518 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067522 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067530 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067535 | orchestrator |
2025-05-05 00:56:33.067541 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-05 00:56:33.067546 | orchestrator | Monday 05 May 2025  00:53:05 +0000 (0:00:00.774)       0:09:16.123 ************
2025-05-05 00:56:33.067551 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.067556 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.067561 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.067566 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067570 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067575 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067581 | orchestrator |
2025-05-05 00:56:33.067586 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-05 00:56:33.067591 | orchestrator | Monday 05 May 2025  00:53:07 +0000 (0:00:01.492)       0:09:17.616 ************
2025-05-05 00:56:33.067596 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067604 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067609 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067614 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067619 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067624 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067629 | orchestrator |
2025-05-05 00:56:33.067635 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-05 00:56:33.067640 | orchestrator | Monday 05 May 2025  00:53:08 +0000 (0:00:00.678)       0:09:18.295 ************
2025-05-05 00:56:33.067645 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.067650 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.067655 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.067660 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067665 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067670 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067674 | orchestrator |
2025-05-05 00:56:33.067679 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-05 00:56:33.067684 | orchestrator | Monday 05 May 2025  00:53:08 +0000 (0:00:00.857)       0:09:19.152 ************
2025-05-05 00:56:33.067689 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067694 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067699 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067704 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067709 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067714 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067719 | orchestrator |
2025-05-05 00:56:33.067724 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-05 00:56:33.067729 | orchestrator | Monday 05 May 2025  00:53:09 +0000 (0:00:00.626)       0:09:19.778 ************
2025-05-05 00:56:33.067734 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067739 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067744 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067749 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067753 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067758 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067763 | orchestrator |
2025-05-05 00:56:33.067768 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-05 00:56:33.067774 | orchestrator | Monday 05 May 2025  00:53:10 +0000 (0:00:00.938)       0:09:20.717 ************
2025-05-05 00:56:33.067779 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067784 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067789 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067794 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067799 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.067803 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.067808 | orchestrator |
2025-05-05 00:56:33.067813 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-05 00:56:33.067818 | orchestrator | Monday 05 May 2025  00:53:11 +0000 (0:00:00.630)       0:09:21.347 ************
2025-05-05 00:56:33.067823 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067829 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067836 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067842 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067847 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067852 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067857 | orchestrator |
2025-05-05 00:56:33.067862 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-05 00:56:33.067867 | orchestrator | Monday 05 May 2025  00:53:11 +0000 (0:00:00.727)       0:09:22.074 ************
2025-05-05 00:56:33.067871 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.067890 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.067896 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.067901 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067906 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067914 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067919 | orchestrator |
2025-05-05 00:56:33.067924 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-05 00:56:33.067929 | orchestrator | Monday 05 May 2025  00:53:12 +0000 (0:00:00.534)       0:09:22.609 ************
2025-05-05 00:56:33.067934 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.067939 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.067944 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.067948 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.067953 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.067958 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.067963 | orchestrator |
2025-05-05 00:56:33.067968 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-05 00:56:33.067973 | orchestrator | Monday 05 May 2025  00:53:13 +0000 (0:00:00.810)       0:09:23.420 ************
2025-05-05 00:56:33.067978 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.067983 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.067988 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.067993 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.067998 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.068002 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.068007 | orchestrator |
2025-05-05 00:56:33.068012 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-05 00:56:33.068020 | orchestrator | Monday 05 May 2025  00:53:13 +0000 (0:00:00.568)       0:09:23.988 ************
2025-05-05 00:56:33.068025 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068030 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068035 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068040 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068045 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068050 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068055 | orchestrator |
2025-05-05 00:56:33.068060 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-05 00:56:33.068065 | orchestrator | Monday 05 May 2025  00:53:14 +0000 (0:00:00.673)       0:09:24.662 ************
2025-05-05 00:56:33.068069 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068074 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068079 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068084 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068089 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068094 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068099 | orchestrator |
2025-05-05 00:56:33.068104 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-05 00:56:33.068109 | orchestrator | Monday 05 May 2025  00:53:14 +0000 (0:00:00.525)       0:09:25.187 ************
2025-05-05 00:56:33.068114 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068119 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068124 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068129 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068133 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068138 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068143 | orchestrator |
2025-05-05 00:56:33.068148 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-05 00:56:33.068153 | orchestrator | Monday 05 May 2025  00:53:15 +0000 (0:00:00.707)       0:09:25.895 ************
2025-05-05 00:56:33.068158 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068163 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068168 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068173 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068178 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068183 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068188 | orchestrator |
2025-05-05 00:56:33.068193 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-05 00:56:33.068202 | orchestrator | Monday 05 May 2025  00:53:16 +0000 (0:00:00.623)       0:09:26.399 ************
2025-05-05 00:56:33.068207 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068212 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068217 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068222 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068230 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068235 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068240 | orchestrator |
2025-05-05 00:56:33.068245 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-05 00:56:33.068250 | orchestrator | Monday 05 May 2025  00:53:16 +0000 (0:00:00.512)       0:09:27.023 ************
2025-05-05 00:56:33.068255 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068260 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068265 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068270 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068275 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068280 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068284 | orchestrator |
2025-05-05 00:56:33.068289 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-05 00:56:33.068294 | orchestrator | Monday 05 May 2025  00:53:17 +0000 (0:00:00.512)       0:09:27.536 ************
2025-05-05 00:56:33.068299 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068304 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068337 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068342 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068347 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068352 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068357 | orchestrator |
2025-05-05 00:56:33.068362 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-05 00:56:33.068367 | orchestrator | Monday 05 May 2025  00:53:18 +0000 (0:00:00.674)       0:09:28.251 ************
2025-05-05 00:56:33.068372 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068377 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068382 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.068387 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.068392 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.068397 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.068402 | orchestrator |
2025-05-05 00:56:33.068422 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-05 00:56:33.068428 | orchestrator | Monday 05 May 2025  00:53:18 +0000 (0:00:00.674)       0:09:28.925 ************
2025-05-05 00:56:33.068436 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.068441 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.068446
| orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068451 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068456 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068461 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068466 | orchestrator | 2025-05-05 00:56:33.068472 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-05 00:56:33.068477 | orchestrator | Monday 05 May 2025 00:53:19 +0000 (0:00:00.858) 0:09:29.784 ************ 2025-05-05 00:56:33.068482 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068486 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068491 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068496 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068501 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068506 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068511 | orchestrator | 2025-05-05 00:56:33.068516 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-05 00:56:33.068521 | orchestrator | Monday 05 May 2025 00:53:20 +0000 (0:00:00.705) 0:09:30.489 ************ 2025-05-05 00:56:33.068530 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068535 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068540 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068545 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068550 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068555 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068560 | orchestrator | 2025-05-05 00:56:33.068565 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-05 00:56:33.068570 | orchestrator | Monday 05 May 2025 00:53:21 +0000 (0:00:00.873) 0:09:31.362 
************ 2025-05-05 00:56:33.068575 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068580 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068585 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068590 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068594 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068599 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068604 | orchestrator | 2025-05-05 00:56:33.068609 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-05 00:56:33.068615 | orchestrator | Monday 05 May 2025 00:53:21 +0000 (0:00:00.609) 0:09:31.971 ************ 2025-05-05 00:56:33.068620 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-05 00:56:33.068625 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-05 00:56:33.068630 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068635 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-05 00:56:33.068640 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-05 00:56:33.068645 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068650 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-05 00:56:33.068655 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-05 00:56:33.068660 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068665 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-05 00:56:33.068670 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-05 00:56:33.068675 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068680 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-05 00:56:33.068685 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-05 00:56:33.068690 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068695 | orchestrator | skipping: [testbed-node-5] => (item=)  
2025-05-05 00:56:33.068700 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-05 00:56:33.068705 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068710 | orchestrator | 2025-05-05 00:56:33.068715 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-05 00:56:33.068720 | orchestrator | Monday 05 May 2025 00:53:22 +0000 (0:00:00.784) 0:09:32.756 ************ 2025-05-05 00:56:33.068725 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-05 00:56:33.068733 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-05 00:56:33.068738 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068746 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-05 00:56:33.068751 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-05 00:56:33.068756 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068761 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-05 00:56:33.068766 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-05 00:56:33.068771 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068776 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-05 00:56:33.068781 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-05 00:56:33.068786 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068791 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-05 00:56:33.068796 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-05 00:56:33.068804 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068809 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-05 00:56:33.068814 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  
2025-05-05 00:56:33.068819 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068824 | orchestrator | 2025-05-05 00:56:33.068829 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-05 00:56:33.068834 | orchestrator | Monday 05 May 2025 00:53:23 +0000 (0:00:00.683) 0:09:33.440 ************ 2025-05-05 00:56:33.068839 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068844 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068849 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068854 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068871 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068877 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068883 | orchestrator | 2025-05-05 00:56:33.068888 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-05 00:56:33.068893 | orchestrator | Monday 05 May 2025 00:53:24 +0000 (0:00:00.760) 0:09:34.200 ************ 2025-05-05 00:56:33.068898 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068902 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068907 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068912 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068917 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068922 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068927 | orchestrator | 2025-05-05 00:56:33.068932 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-05 00:56:33.068937 | orchestrator | Monday 05 May 2025 00:53:24 +0000 (0:00:00.573) 0:09:34.773 ************ 2025-05-05 00:56:33.068942 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068947 | orchestrator | skipping: [testbed-node-1] 2025-05-05 
00:56:33.068952 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.068957 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.068962 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.068966 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.068971 | orchestrator | 2025-05-05 00:56:33.068976 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-05 00:56:33.068981 | orchestrator | Monday 05 May 2025 00:53:25 +0000 (0:00:00.836) 0:09:35.610 ************ 2025-05-05 00:56:33.068986 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.068991 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.068996 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069001 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069006 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069011 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069016 | orchestrator | 2025-05-05 00:56:33.069020 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-05 00:56:33.069035 | orchestrator | Monday 05 May 2025 00:53:26 +0000 (0:00:00.602) 0:09:36.213 ************ 2025-05-05 00:56:33.069040 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069045 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069050 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069055 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069059 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069064 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069069 | orchestrator | 2025-05-05 00:56:33.069076 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-05 00:56:33.069081 | orchestrator | Monday 05 May 2025 00:53:26 +0000 (0:00:00.719) 0:09:36.932 ************ 2025-05-05 
00:56:33.069086 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069091 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069099 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069104 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069109 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069114 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069118 | orchestrator | 2025-05-05 00:56:33.069123 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-05 00:56:33.069128 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.512) 0:09:37.445 ************ 2025-05-05 00:56:33.069133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.069138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.069143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.069148 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069153 | orchestrator | 2025-05-05 00:56:33.069158 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-05 00:56:33.069162 | orchestrator | Monday 05 May 2025 00:53:27 +0000 (0:00:00.354) 0:09:37.800 ************ 2025-05-05 00:56:33.069167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.069172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.069177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.069182 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069187 | orchestrator | 2025-05-05 00:56:33.069191 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-05 00:56:33.069196 | orchestrator | Monday 05 May 2025 00:53:28 +0000 (0:00:00.487) 0:09:38.288 ************ 
2025-05-05 00:56:33.069201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.069206 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.069211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.069216 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069221 | orchestrator | 2025-05-05 00:56:33.069226 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-05 00:56:33.069231 | orchestrator | Monday 05 May 2025 00:53:28 +0000 (0:00:00.672) 0:09:38.960 ************ 2025-05-05 00:56:33.069236 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069241 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069246 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069250 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069255 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069260 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069265 | orchestrator | 2025-05-05 00:56:33.069270 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-05 00:56:33.069275 | orchestrator | Monday 05 May 2025 00:53:29 +0000 (0:00:00.531) 0:09:39.491 ************ 2025-05-05 00:56:33.069279 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-05 00:56:33.069284 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069292 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-05 00:56:33.069297 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069302 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-05 00:56:33.069322 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069342 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.069350 | orchestrator | skipping: [testbed-node-3] 2025-05-05 
00:56:33.069355 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.069360 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069365 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.069370 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069375 | orchestrator | 2025-05-05 00:56:33.069380 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-05 00:56:33.069385 | orchestrator | Monday 05 May 2025 00:53:30 +0000 (0:00:01.301) 0:09:40.793 ************ 2025-05-05 00:56:33.069390 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069397 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069402 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069407 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069412 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069417 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069421 | orchestrator | 2025-05-05 00:56:33.069426 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-05 00:56:33.069431 | orchestrator | Monday 05 May 2025 00:53:31 +0000 (0:00:00.540) 0:09:41.334 ************ 2025-05-05 00:56:33.069436 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069441 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069446 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069451 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069456 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069460 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069465 | orchestrator | 2025-05-05 00:56:33.069470 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-05 00:56:33.069475 | orchestrator | Monday 05 May 2025 00:53:31 +0000 (0:00:00.623) 0:09:41.957 
************ 2025-05-05 00:56:33.069480 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-05 00:56:33.069485 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-05 00:56:33.069490 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069495 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-05 00:56:33.069500 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069505 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.069510 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069514 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069519 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.069524 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069529 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.069534 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069539 | orchestrator | 2025-05-05 00:56:33.069544 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-05 00:56:33.069549 | orchestrator | Monday 05 May 2025 00:53:32 +0000 (0:00:00.957) 0:09:42.914 ************ 2025-05-05 00:56:33.069553 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069558 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069563 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069568 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.069573 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069578 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.069583 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069588 | orchestrator | skipping: 
[testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.069593 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069598 | orchestrator | 2025-05-05 00:56:33.069603 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-05 00:56:33.069608 | orchestrator | Monday 05 May 2025 00:53:33 +0000 (0:00:01.052) 0:09:43.967 ************ 2025-05-05 00:56:33.069613 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-05 00:56:33.069618 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-05 00:56:33.069622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-05 00:56:33.069627 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069632 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-05 00:56:33.069637 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-05 00:56:33.069645 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-05 00:56:33.069649 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069654 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-05 00:56:33.069659 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-05 00:56:33.069664 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-05 00:56:33.069669 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.069678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.069683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.069688 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-05 00:56:33.069693 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-05 00:56:33.069698 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-05 00:56:33.069703 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069708 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069713 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-05 00:56:33.069720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-05 00:56:33.069725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-05 00:56:33.069730 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069746 | orchestrator | 2025-05-05 00:56:33.069752 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-05 00:56:33.069757 | orchestrator | Monday 05 May 2025 00:53:35 +0000 (0:00:01.341) 0:09:45.308 ************ 2025-05-05 00:56:33.069762 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069767 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069772 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069777 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069782 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069787 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069792 | orchestrator | 2025-05-05 00:56:33.069797 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-05 00:56:33.069802 | orchestrator | Monday 05 May 2025 00:53:36 +0000 (0:00:01.192) 0:09:46.501 ************ 2025-05-05 00:56:33.069806 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069811 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069816 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069821 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-05 00:56:33.069826 | orchestrator | 
skipping: [testbed-node-3] 2025-05-05 00:56:33.069831 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-05 00:56:33.069836 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069840 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-05 00:56:33.069845 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069850 | orchestrator | 2025-05-05 00:56:33.069855 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-05 00:56:33.069860 | orchestrator | Monday 05 May 2025 00:53:37 +0000 (0:00:01.076) 0:09:47.577 ************ 2025-05-05 00:56:33.069865 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069870 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069875 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069880 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069884 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069892 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069897 | orchestrator | 2025-05-05 00:56:33.069902 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-05 00:56:33.069907 | orchestrator | Monday 05 May 2025 00:53:38 +0000 (0:00:01.162) 0:09:48.740 ************ 2025-05-05 00:56:33.069912 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.069921 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.069926 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.069931 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.069936 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.069941 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.069945 | orchestrator | 2025-05-05 00:56:33.069950 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-05 00:56:33.069955 | orchestrator | Monday 05 May 2025 
00:53:39 +0000 (0:00:01.184) 0:09:49.925 ************ 2025-05-05 00:56:33.069960 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.069965 | orchestrator | 2025-05-05 00:56:33.069972 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-05-05 00:56:33.069978 | orchestrator | Monday 05 May 2025 00:53:43 +0000 (0:00:03.306) 0:09:53.231 ************ 2025-05-05 00:56:33.069983 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.069987 | orchestrator | 2025-05-05 00:56:33.069992 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-05 00:56:33.069997 | orchestrator | Monday 05 May 2025 00:53:44 +0000 (0:00:01.731) 0:09:54.962 ************ 2025-05-05 00:56:33.070002 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.070007 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.070026 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.070032 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.070037 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.070042 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.070047 | orchestrator | 2025-05-05 00:56:33.070052 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-05 00:56:33.070057 | orchestrator | Monday 05 May 2025 00:53:46 +0000 (0:00:01.864) 0:09:56.827 ************ 2025-05-05 00:56:33.070062 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.070066 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.070071 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.070076 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.070081 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.070086 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.070091 | orchestrator | 2025-05-05 00:56:33.070095 | orchestrator | TASK [ceph-crash : 
include_tasks systemd.yml] ********************************** 2025-05-05 00:56:33.070100 | orchestrator | Monday 05 May 2025 00:53:47 +0000 (0:00:00.979) 0:09:57.806 ************ 2025-05-05 00:56:33.070105 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.070111 | orchestrator | 2025-05-05 00:56:33.070116 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-05 00:56:33.070120 | orchestrator | Monday 05 May 2025 00:53:48 +0000 (0:00:01.365) 0:09:59.172 ************ 2025-05-05 00:56:33.070125 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.070130 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.070135 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.070140 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.070145 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.070150 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.070155 | orchestrator | 2025-05-05 00:56:33.070160 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-05 00:56:33.070164 | orchestrator | Monday 05 May 2025 00:53:50 +0000 (0:00:01.929) 0:10:01.101 ************ 2025-05-05 00:56:33.070169 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.070174 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.070179 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.070184 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.070189 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.070194 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.070198 | orchestrator | 2025-05-05 00:56:33.070203 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-05 00:56:33.070224 | orchestrator 
| Monday 05 May 2025 00:53:55 +0000 (0:00:04.097) 0:10:05.198 ************ 2025-05-05 00:56:33.070231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.070236 | orchestrator | 2025-05-05 00:56:33.070241 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-05 00:56:33.070246 | orchestrator | Monday 05 May 2025 00:53:56 +0000 (0:00:01.451) 0:10:06.650 ************ 2025-05-05 00:56:33.070251 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.070256 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.070261 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.070266 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070270 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070275 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070280 | orchestrator | 2025-05-05 00:56:33.070285 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-05 00:56:33.070290 | orchestrator | Monday 05 May 2025 00:53:57 +0000 (0:00:00.721) 0:10:07.371 ************ 2025-05-05 00:56:33.070295 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.070300 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.070314 | orchestrator | changed: [testbed-node-3] 2025-05-05 00:56:33.070319 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.070324 | orchestrator | changed: [testbed-node-4] 2025-05-05 00:56:33.070329 | orchestrator | changed: [testbed-node-5] 2025-05-05 00:56:33.070334 | orchestrator | 2025-05-05 00:56:33.070339 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-05 00:56:33.070343 | orchestrator | Monday 05 May 2025 00:53:59 +0000 (0:00:02.488) 0:10:09.860 ************ 2025-05-05 00:56:33.070348 | orchestrator | ok: 
[testbed-node-0] 2025-05-05 00:56:33.070353 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.070358 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.070363 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070368 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070375 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070380 | orchestrator | 2025-05-05 00:56:33.070385 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-05 00:56:33.070390 | orchestrator | 2025-05-05 00:56:33.070395 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-05 00:56:33.070400 | orchestrator | Monday 05 May 2025 00:54:02 +0000 (0:00:02.897) 0:10:12.758 ************ 2025-05-05 00:56:33.070405 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.070412 | orchestrator | 2025-05-05 00:56:33.070417 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-05 00:56:33.070422 | orchestrator | Monday 05 May 2025 00:54:03 +0000 (0:00:00.753) 0:10:13.512 ************ 2025-05-05 00:56:33.070427 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070432 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070437 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070442 | orchestrator | 2025-05-05 00:56:33.070447 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-05 00:56:33.070452 | orchestrator | Monday 05 May 2025 00:54:03 +0000 (0:00:00.332) 0:10:13.844 ************ 2025-05-05 00:56:33.070457 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070461 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070466 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070471 | orchestrator | 2025-05-05 
00:56:33.070477 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-05 00:56:33.070481 | orchestrator | Monday 05 May 2025 00:54:04 +0000 (0:00:00.726) 0:10:14.571 ************ 2025-05-05 00:56:33.070486 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070491 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070496 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070501 | orchestrator | 2025-05-05 00:56:33.070511 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-05 00:56:33.070517 | orchestrator | Monday 05 May 2025 00:54:05 +0000 (0:00:00.992) 0:10:15.564 ************ 2025-05-05 00:56:33.070521 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070526 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070531 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070536 | orchestrator | 2025-05-05 00:56:33.070543 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-05 00:56:33.070548 | orchestrator | Monday 05 May 2025 00:54:06 +0000 (0:00:00.751) 0:10:16.315 ************ 2025-05-05 00:56:33.070553 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070558 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070563 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070568 | orchestrator | 2025-05-05 00:56:33.070573 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-05 00:56:33.070578 | orchestrator | Monday 05 May 2025 00:54:06 +0000 (0:00:00.349) 0:10:16.664 ************ 2025-05-05 00:56:33.070583 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070588 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070593 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070597 | orchestrator | 2025-05-05 00:56:33.070602 | orchestrator | TASK 
[ceph-handler : check for a nfs container] ******************************** 2025-05-05 00:56:33.070607 | orchestrator | Monday 05 May 2025 00:54:06 +0000 (0:00:00.338) 0:10:17.003 ************ 2025-05-05 00:56:33.070612 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070617 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070622 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070627 | orchestrator | 2025-05-05 00:56:33.070632 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-05 00:56:33.070637 | orchestrator | Monday 05 May 2025 00:54:07 +0000 (0:00:00.656) 0:10:17.660 ************ 2025-05-05 00:56:33.070642 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070646 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070651 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070656 | orchestrator | 2025-05-05 00:56:33.070661 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-05 00:56:33.070666 | orchestrator | Monday 05 May 2025 00:54:07 +0000 (0:00:00.349) 0:10:18.010 ************ 2025-05-05 00:56:33.070671 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070689 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070695 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070700 | orchestrator | 2025-05-05 00:56:33.070705 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-05 00:56:33.070710 | orchestrator | Monday 05 May 2025 00:54:08 +0000 (0:00:00.355) 0:10:18.365 ************ 2025-05-05 00:56:33.070715 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070720 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070725 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070730 | orchestrator | 2025-05-05 00:56:33.070735 | orchestrator | TASK 
[ceph-handler : check for a ceph-crash container] ************************* 2025-05-05 00:56:33.070740 | orchestrator | Monday 05 May 2025 00:54:08 +0000 (0:00:00.347) 0:10:18.712 ************ 2025-05-05 00:56:33.070745 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070750 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070754 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070759 | orchestrator | 2025-05-05 00:56:33.070764 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-05 00:56:33.070769 | orchestrator | Monday 05 May 2025 00:54:09 +0000 (0:00:01.009) 0:10:19.722 ************ 2025-05-05 00:56:33.070774 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070779 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070784 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070789 | orchestrator | 2025-05-05 00:56:33.070794 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-05 00:56:33.070802 | orchestrator | Monday 05 May 2025 00:54:09 +0000 (0:00:00.285) 0:10:20.007 ************ 2025-05-05 00:56:33.070807 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070812 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070817 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070821 | orchestrator | 2025-05-05 00:56:33.070826 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-05 00:56:33.070831 | orchestrator | Monday 05 May 2025 00:54:10 +0000 (0:00:00.295) 0:10:20.303 ************ 2025-05-05 00:56:33.070836 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070841 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070846 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070851 | orchestrator | 2025-05-05 00:56:33.070856 | orchestrator | TASK [ceph-handler : set_fact 
handler_mds_status] ****************************** 2025-05-05 00:56:33.070861 | orchestrator | Monday 05 May 2025 00:54:10 +0000 (0:00:00.300) 0:10:20.604 ************ 2025-05-05 00:56:33.070866 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070871 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070876 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070880 | orchestrator | 2025-05-05 00:56:33.070885 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-05 00:56:33.070890 | orchestrator | Monday 05 May 2025 00:54:10 +0000 (0:00:00.473) 0:10:21.077 ************ 2025-05-05 00:56:33.070896 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.070901 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.070905 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.070910 | orchestrator | 2025-05-05 00:56:33.070915 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-05 00:56:33.070920 | orchestrator | Monday 05 May 2025 00:54:11 +0000 (0:00:00.291) 0:10:21.369 ************ 2025-05-05 00:56:33.070925 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070930 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070937 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070942 | orchestrator | 2025-05-05 00:56:33.070947 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-05 00:56:33.070952 | orchestrator | Monday 05 May 2025 00:54:11 +0000 (0:00:00.270) 0:10:21.639 ************ 2025-05-05 00:56:33.070957 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070962 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070967 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.070972 | orchestrator | 2025-05-05 00:56:33.070977 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] 
****************************** 2025-05-05 00:56:33.070982 | orchestrator | Monday 05 May 2025 00:54:11 +0000 (0:00:00.279) 0:10:21.919 ************ 2025-05-05 00:56:33.070987 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.070992 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.070997 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071002 | orchestrator | 2025-05-05 00:56:33.071007 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-05 00:56:33.071012 | orchestrator | Monday 05 May 2025 00:54:12 +0000 (0:00:00.435) 0:10:22.354 ************ 2025-05-05 00:56:33.071017 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:56:33.071021 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:56:33.071027 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:56:33.071031 | orchestrator | 2025-05-05 00:56:33.071039 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-05 00:56:33.071044 | orchestrator | Monday 05 May 2025 00:54:12 +0000 (0:00:00.304) 0:10:22.659 ************ 2025-05-05 00:56:33.071049 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071053 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071058 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071063 | orchestrator | 2025-05-05 00:56:33.071068 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-05 00:56:33.071073 | orchestrator | Monday 05 May 2025 00:54:12 +0000 (0:00:00.278) 0:10:22.937 ************ 2025-05-05 00:56:33.071081 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071086 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071091 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071096 | orchestrator | 2025-05-05 00:56:33.071101 | orchestrator | TASK [ceph-config : reset num_osds] 
******************************************** 2025-05-05 00:56:33.071106 | orchestrator | Monday 05 May 2025 00:54:13 +0000 (0:00:00.309) 0:10:23.246 ************ 2025-05-05 00:56:33.071111 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071116 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071121 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071126 | orchestrator | 2025-05-05 00:56:33.071131 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-05 00:56:33.071135 | orchestrator | Monday 05 May 2025 00:54:13 +0000 (0:00:00.601) 0:10:23.848 ************ 2025-05-05 00:56:33.071141 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071145 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071164 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071170 | orchestrator | 2025-05-05 00:56:33.071175 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-05 00:56:33.071180 | orchestrator | Monday 05 May 2025 00:54:13 +0000 (0:00:00.336) 0:10:24.184 ************ 2025-05-05 00:56:33.071185 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071190 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071194 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071199 | orchestrator | 2025-05-05 00:56:33.071204 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-05 00:56:33.071209 | orchestrator | Monday 05 May 2025 00:54:14 +0000 (0:00:00.362) 0:10:24.547 ************ 2025-05-05 00:56:33.071214 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071219 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071224 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071228 | orchestrator | 2025-05-05 00:56:33.071233 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-05-05 00:56:33.071238 | orchestrator | Monday 05 May 2025 00:54:14 +0000 (0:00:00.315) 0:10:24.863 ************ 2025-05-05 00:56:33.071243 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071248 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071253 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071258 | orchestrator | 2025-05-05 00:56:33.071263 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-05 00:56:33.071268 | orchestrator | Monday 05 May 2025 00:54:15 +0000 (0:00:00.604) 0:10:25.467 ************ 2025-05-05 00:56:33.071273 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071278 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071283 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071288 | orchestrator | 2025-05-05 00:56:33.071293 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-05 00:56:33.071298 | orchestrator | Monday 05 May 2025 00:54:15 +0000 (0:00:00.334) 0:10:25.802 ************ 2025-05-05 00:56:33.071302 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071333 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071338 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071343 | orchestrator | 2025-05-05 00:56:33.071348 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-05 00:56:33.071353 | orchestrator | Monday 05 May 2025 00:54:15 +0000 (0:00:00.337) 0:10:26.139 ************ 2025-05-05 00:56:33.071359 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071364 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071369 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071373 | orchestrator | 
2025-05-05 00:56:33.071378 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-05 00:56:33.071387 | orchestrator | Monday 05 May 2025 00:54:16 +0000 (0:00:00.314) 0:10:26.453 ************ 2025-05-05 00:56:33.071392 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071397 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071402 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071406 | orchestrator | 2025-05-05 00:56:33.071411 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-05 00:56:33.071416 | orchestrator | Monday 05 May 2025 00:54:16 +0000 (0:00:00.698) 0:10:27.152 ************ 2025-05-05 00:56:33.071421 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071426 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071431 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071436 | orchestrator | 2025-05-05 00:56:33.071441 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-05 00:56:33.071446 | orchestrator | Monday 05 May 2025 00:54:17 +0000 (0:00:00.352) 0:10:27.505 ************ 2025-05-05 00:56:33.071451 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-05 00:56:33.071455 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-05 00:56:33.071460 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071465 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-05 00:56:33.071470 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-05 00:56:33.071475 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071480 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-05 00:56:33.071487 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-05 00:56:33.071493 | orchestrator | skipping: [testbed-node-5] 2025-05-05 
00:56:33.071497 | orchestrator | 2025-05-05 00:56:33.071502 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-05 00:56:33.071507 | orchestrator | Monday 05 May 2025 00:54:17 +0000 (0:00:00.438) 0:10:27.944 ************ 2025-05-05 00:56:33.071512 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-05 00:56:33.071517 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-05 00:56:33.071522 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071527 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-05 00:56:33.071532 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-05 00:56:33.071536 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071544 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-05 00:56:33.071548 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-05 00:56:33.071553 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071558 | orchestrator | 2025-05-05 00:56:33.071563 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-05 00:56:33.071568 | orchestrator | Monday 05 May 2025 00:54:18 +0000 (0:00:00.356) 0:10:28.300 ************ 2025-05-05 00:56:33.071573 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071578 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071583 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071588 | orchestrator | 2025-05-05 00:56:33.071593 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-05 00:56:33.071598 | orchestrator | Monday 05 May 2025 00:54:18 +0000 (0:00:00.664) 0:10:28.965 ************ 2025-05-05 00:56:33.071602 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071621 | orchestrator | skipping: 
[testbed-node-4] 2025-05-05 00:56:33.071627 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071632 | orchestrator | 2025-05-05 00:56:33.071637 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-05 00:56:33.071643 | orchestrator | Monday 05 May 2025 00:54:19 +0000 (0:00:00.355) 0:10:29.321 ************ 2025-05-05 00:56:33.071648 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071653 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071657 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071669 | orchestrator | 2025-05-05 00:56:33.071674 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-05 00:56:33.071679 | orchestrator | Monday 05 May 2025 00:54:19 +0000 (0:00:00.332) 0:10:29.653 ************ 2025-05-05 00:56:33.071684 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071689 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071693 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071698 | orchestrator | 2025-05-05 00:56:33.071703 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-05 00:56:33.071710 | orchestrator | Monday 05 May 2025 00:54:19 +0000 (0:00:00.321) 0:10:29.975 ************ 2025-05-05 00:56:33.071715 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071720 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071725 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071730 | orchestrator | 2025-05-05 00:56:33.071735 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-05 00:56:33.071740 | orchestrator | Monday 05 May 2025 00:54:20 +0000 (0:00:00.665) 0:10:30.641 ************ 2025-05-05 00:56:33.071745 | orchestrator | skipping: [testbed-node-3] 
2025-05-05 00:56:33.071749 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071754 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071759 | orchestrator | 2025-05-05 00:56:33.071764 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-05 00:56:33.071769 | orchestrator | Monday 05 May 2025 00:54:20 +0000 (0:00:00.379) 0:10:31.021 ************ 2025-05-05 00:56:33.071774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.071779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.071783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.071788 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071793 | orchestrator | 2025-05-05 00:56:33.071798 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-05 00:56:33.071803 | orchestrator | Monday 05 May 2025 00:54:21 +0000 (0:00:00.484) 0:10:31.505 ************ 2025-05-05 00:56:33.071808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.071813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.071818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.071823 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071828 | orchestrator | 2025-05-05 00:56:33.071833 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-05 00:56:33.071838 | orchestrator | Monday 05 May 2025 00:54:21 +0000 (0:00:00.431) 0:10:31.936 ************ 2025-05-05 00:56:33.071843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.071848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.071853 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-05 00:56:33.071858 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071863 | orchestrator | 2025-05-05 00:56:33.071868 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-05 00:56:33.071873 | orchestrator | Monday 05 May 2025 00:54:22 +0000 (0:00:00.448) 0:10:32.385 ************ 2025-05-05 00:56:33.071878 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071883 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071888 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071893 | orchestrator | 2025-05-05 00:56:33.071897 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-05 00:56:33.071902 | orchestrator | Monday 05 May 2025 00:54:22 +0000 (0:00:00.352) 0:10:32.738 ************ 2025-05-05 00:56:33.071907 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.071912 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.071917 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071926 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071931 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.071936 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071941 | orchestrator | 2025-05-05 00:56:33.071946 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-05 00:56:33.071951 | orchestrator | Monday 05 May 2025 00:54:23 +0000 (0:00:00.860) 0:10:33.598 ************ 2025-05-05 00:56:33.071955 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071960 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071965 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.071970 | orchestrator | 2025-05-05 00:56:33.071975 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-05-05 00:56:33.071980 | orchestrator | Monday 05 May 2025 00:54:23 +0000 (0:00:00.337) 0:10:33.935 ************ 2025-05-05 00:56:33.071985 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.071990 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.071995 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072000 | orchestrator | 2025-05-05 00:56:33.072005 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-05 00:56:33.072010 | orchestrator | Monday 05 May 2025 00:54:24 +0000 (0:00:00.322) 0:10:34.257 ************ 2025-05-05 00:56:33.072015 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-05 00:56:33.072020 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072025 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-05 00:56:33.072030 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072035 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-05 00:56:33.072040 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072045 | orchestrator | 2025-05-05 00:56:33.072062 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-05 00:56:33.072068 | orchestrator | Monday 05 May 2025 00:54:24 +0000 (0:00:00.457) 0:10:34.715 ************ 2025-05-05 00:56:33.072073 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.072078 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072084 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.072089 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072094 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-05 00:56:33.072099 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072104 | orchestrator | 2025-05-05 00:56:33.072109 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-05 00:56:33.072114 | orchestrator | Monday 05 May 2025 00:54:25 +0000 (0:00:00.729) 0:10:35.444 ************ 2025-05-05 00:56:33.072119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-05 00:56:33.072124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05 00:56:33.072129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-05 00:56:33.072134 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-05 00:56:33.072139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-05 00:56:33.072143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-05 00:56:33.072148 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072153 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-05 00:56:33.072163 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-05 00:56:33.072168 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-05 00:56:33.072173 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072178 | orchestrator | 2025-05-05 00:56:33.072187 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-05 00:56:33.072192 | orchestrator | Monday 05 May 2025 00:54:25 +0000 (0:00:00.714) 0:10:36.158 ************ 2025-05-05 00:56:33.072197 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072202 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072207 | orchestrator | skipping: 
[testbed-node-5] 2025-05-05 00:56:33.072211 | orchestrator | 2025-05-05 00:56:33.072216 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-05 00:56:33.072221 | orchestrator | Monday 05 May 2025 00:54:26 +0000 (0:00:00.862) 0:10:37.021 ************ 2025-05-05 00:56:33.072226 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-05 00:56:33.072231 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072236 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-05 00:56:33.072242 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072247 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-05 00:56:33.072251 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072256 | orchestrator | 2025-05-05 00:56:33.072261 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-05 00:56:33.072266 | orchestrator | Monday 05 May 2025 00:54:27 +0000 (0:00:00.525) 0:10:37.546 ************ 2025-05-05 00:56:33.072271 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072276 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072282 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072287 | orchestrator | 2025-05-05 00:56:33.072292 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-05 00:56:33.072297 | orchestrator | Monday 05 May 2025 00:54:28 +0000 (0:00:00.684) 0:10:38.231 ************ 2025-05-05 00:56:33.072302 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072319 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072324 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072329 | orchestrator | 2025-05-05 00:56:33.072334 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-05 00:56:33.072342 | orchestrator | Monday 05 
May 2025 00:54:28 +0000 (0:00:00.462) 0:10:38.694 ************ 2025-05-05 00:56:33.072347 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:56:33.072352 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:56:33.072357 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-05 00:56:33.072362 | orchestrator | 2025-05-05 00:56:33.072367 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-05 00:56:33.072372 | orchestrator | Monday 05 May 2025 00:54:28 +0000 (0:00:00.308) 0:10:39.002 ************ 2025-05-05 00:56:33.072377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-05 00:56:33.072382 | orchestrator | 2025-05-05 00:56:33.072387 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-05 00:56:33.072392 | orchestrator | Monday 05 May 2025 00:54:30 +0000 (0:00:01.801) 0:10:40.804 ************ 2025-05-05 00:56:33.072398 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-05 00:56:33.072409 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:56:33.072414 | orchestrator | 2025-05-05 00:56:33.072419 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-05 00:56:33.072423 | orchestrator | Monday 05 May 2025 00:54:30 +0000 (0:00:00.329) 0:10:41.133 ************ 2025-05-05 00:56:33.072442 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-05 00:56:33.072450 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-05 00:56:33.072458 | orchestrator | 2025-05-05 00:56:33.072464 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-05 00:56:33.072469 | orchestrator | Monday 05 May 2025 00:54:37 +0000 (0:00:07.004) 0:10:48.138 ************ 2025-05-05 00:56:33.072474 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-05 00:56:33.072478 | orchestrator | 2025-05-05 00:56:33.072483 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-05 00:56:33.072488 | orchestrator | Monday 05 May 2025 00:54:40 +0000 (0:00:02.885) 0:10:51.024 ************ 2025-05-05 00:56:33.072493 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 00:56:33.072498 | orchestrator | 2025-05-05 00:56:33.072503 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-05 00:56:33.072508 | orchestrator | Monday 05 May 2025 00:54:41 +0000 (0:00:00.749) 0:10:51.773 ************ 2025-05-05 00:56:33.072514 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-05 00:56:33.072523 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-05 00:56:33.072531 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-05 00:56:33.072540 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-05 00:56:33.072548 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-05 00:56:33.072556 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-05-05 00:56:33.072564 | orchestrator |
2025-05-05 00:56:33.072573 | orchestrator | TASK [ceph-mds : get keys from monitors] ***************************************
2025-05-05 00:56:33.072583 | orchestrator | Monday 05 May 2025 00:54:42 +0000 (0:00:01.082) 0:10:52.855 ************
2025-05-05 00:56:33.072590 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:56:33.072598 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.072604 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-05 00:56:33.072618 | orchestrator |
2025-05-05 00:56:33.072628 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] ***********************************
2025-05-05 00:56:33.072635 | orchestrator | Monday 05 May 2025 00:54:44 +0000 (0:00:01.766) 0:10:54.622 ************
2025-05-05 00:56:33.072642 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.072649 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.072656 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.072663 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-05 00:56:33.072670 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-05 00:56:33.072678 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.072685 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-05 00:56:33.072692 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-05 00:56:33.072699 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.072706 | orchestrator |
2025-05-05 00:56:33.072714 | orchestrator | TASK [ceph-mds : non_containerized.yml] ****************************************
2025-05-05 00:56:33.072721 | orchestrator | Monday 05 May 2025 00:54:45 +0000 (0:00:01.218) 0:10:55.841 ************
2025-05-05 00:56:33.072729 | orchestrator | skipping:
[testbed-node-3]
2025-05-05 00:56:33.072736 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.072743 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.072751 | orchestrator |
2025-05-05 00:56:33.072758 | orchestrator | TASK [ceph-mds : containerized.yml] ********************************************
2025-05-05 00:56:33.072766 | orchestrator | Monday 05 May 2025 00:54:46 +0000 (0:00:00.557) 0:10:56.398 ************
2025-05-05 00:56:33.072773 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.072790 | orchestrator |
2025-05-05 00:56:33.072799 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************
2025-05-05 00:56:33.072808 | orchestrator | Monday 05 May 2025 00:54:46 +0000 (0:00:00.597) 0:10:56.996 ************
2025-05-05 00:56:33.072816 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.072825 | orchestrator |
2025-05-05 00:56:33.072833 | orchestrator | TASK [ceph-mds : generate systemd unit file] ***********************************
2025-05-05 00:56:33.072841 | orchestrator | Monday 05 May 2025 00:54:47 +0000 (0:00:00.737) 0:10:57.733 ************
2025-05-05 00:56:33.072848 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.072857 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.072865 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.072873 | orchestrator |
2025-05-05 00:56:33.072881 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************
2025-05-05 00:56:33.072889 | orchestrator | Monday 05 May 2025 00:54:48 +0000 (0:00:01.161) 0:10:58.895 ************
2025-05-05 00:56:33.072911 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.072919 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.072927 | orchestrator |
changed: [testbed-node-5]
2025-05-05 00:56:33.072935 | orchestrator |
2025-05-05 00:56:33.072947 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] ***************************************
2025-05-05 00:56:33.072955 | orchestrator | Monday 05 May 2025 00:54:49 +0000 (0:00:01.193) 0:11:00.089 ************
2025-05-05 00:56:33.072989 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.072995 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.073001 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.073006 | orchestrator |
2025-05-05 00:56:33.073011 | orchestrator | TASK [ceph-mds : systemd start mds container] **********************************
2025-05-05 00:56:33.073016 | orchestrator | Monday 05 May 2025 00:54:51 +0000 (0:00:02.023) 0:11:02.112 ************
2025-05-05 00:56:33.073021 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.073026 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.073031 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.073036 | orchestrator |
2025-05-05 00:56:33.073041 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] *********************************
2025-05-05 00:56:33.073046 | orchestrator | Monday 05 May 2025 00:54:53 +0000 (0:00:01.854) 0:11:03.967 ************
2025-05-05 00:56:33.073051 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left).
2025-05-05 00:56:33.073056 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left).
2025-05-05 00:56:33.073061 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left).
2025-05-05 00:56:33.073066 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073071 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073076 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073081 | orchestrator |
2025-05-05 00:56:33.073086 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-05 00:56:33.073091 | orchestrator | Monday 05 May 2025 00:55:10 +0000 (0:00:17.033) 0:11:21.000 ************
2025-05-05 00:56:33.073096 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.073100 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.073132 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.073138 | orchestrator |
2025-05-05 00:56:33.073144 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-05-05 00:56:33.073149 | orchestrator | Monday 05 May 2025 00:55:11 +0000 (0:00:00.688) 0:11:21.689 ************
2025-05-05 00:56:33.073154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.073159 | orchestrator |
2025-05-05 00:56:33.073164 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ********
2025-05-05 00:56:33.073174 | orchestrator | Monday 05 May 2025 00:55:12 +0000 (0:00:00.722) 0:11:22.412 ************
2025-05-05 00:56:33.073179 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073184 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073189 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073194 | orchestrator |
2025-05-05 00:56:33.073199 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-05-05 00:56:33.073204 | orchestrator | Monday 05 May 2025 00:55:12 +0000 (0:00:00.351) 0:11:22.763 ************
2025-05-05 00:56:33.073208 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.073213 |
orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.073218 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.073223 | orchestrator |
2025-05-05 00:56:33.073228 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ********************
2025-05-05 00:56:33.073233 | orchestrator | Monday 05 May 2025 00:55:13 +0000 (0:00:01.184) 0:11:23.948 ************
2025-05-05 00:56:33.073238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.073243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.073248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.073253 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073258 | orchestrator |
2025-05-05 00:56:33.073263 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-05-05 00:56:33.073268 | orchestrator | Monday 05 May 2025 00:55:14 +0000 (0:00:01.159) 0:11:25.108 ************
2025-05-05 00:56:33.073273 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073278 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073283 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073288 | orchestrator |
2025-05-05 00:56:33.073293 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-05 00:56:33.073298 | orchestrator | Monday 05 May 2025 00:55:15 +0000 (0:00:00.357) 0:11:25.465 ************
2025-05-05 00:56:33.073303 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.073338 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.073343 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.073348 | orchestrator |
2025-05-05 00:56:33.073354 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-05 00:56:33.073362 | orchestrator |
2025-05-05 00:56:33.073370 | orchestrator | TASK
[ceph-handler : include check_running_containers.yml] *********************
2025-05-05 00:56:33.073378 | orchestrator | Monday 05 May 2025 00:55:17 +0000 (0:00:01.912) 0:11:27.378 ************
2025-05-05 00:56:33.073385 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.073399 | orchestrator |
2025-05-05 00:56:33.073406 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-05 00:56:33.073414 | orchestrator | Monday 05 May 2025 00:55:17 +0000 (0:00:00.682) 0:11:28.060 ************
2025-05-05 00:56:33.073422 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073428 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073432 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073438 | orchestrator |
2025-05-05 00:56:33.073442 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-05 00:56:33.073448 | orchestrator | Monday 05 May 2025 00:55:18 +0000 (0:00:00.324) 0:11:28.385 ************
2025-05-05 00:56:33.073452 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073458 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073462 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073468 | orchestrator |
2025-05-05 00:56:33.073473 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-05 00:56:33.073477 | orchestrator | Monday 05 May 2025 00:55:18 +0000 (0:00:00.718) 0:11:29.103 ************
2025-05-05 00:56:33.073482 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073490 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073516 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073523 | orchestrator |
2025-05-05 00:56:33.073532 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-05 00:56:33.073537 | orchestrator | Monday 05 May 2025 00:55:19 +0000 (0:00:00.792) 0:11:29.895 ************
2025-05-05 00:56:33.073542 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073547 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073552 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073557 | orchestrator |
2025-05-05 00:56:33.073562 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-05 00:56:33.073567 | orchestrator | Monday 05 May 2025 00:55:20 +0000 (0:00:00.760) 0:11:30.656 ************
2025-05-05 00:56:33.073572 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073577 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073582 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073587 | orchestrator |
2025-05-05 00:56:33.073592 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-05 00:56:33.073597 | orchestrator | Monday 05 May 2025 00:55:20 +0000 (0:00:00.337) 0:11:30.993 ************
2025-05-05 00:56:33.073601 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073606 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073611 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073616 | orchestrator |
2025-05-05 00:56:33.073621 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-05 00:56:33.073626 | orchestrator | Monday 05 May 2025 00:55:21 +0000 (0:00:00.326) 0:11:31.320 ************
2025-05-05 00:56:33.073631 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073636 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073641 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073646 | orchestrator |
2025-05-05 00:56:33.073651 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-05
00:56:33.073656 | orchestrator | Monday 05 May 2025 00:55:21 +0000 (0:00:00.623) 0:11:31.944 ************
2025-05-05 00:56:33.073661 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073666 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073671 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073676 | orchestrator |
2025-05-05 00:56:33.073681 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-05 00:56:33.073686 | orchestrator | Monday 05 May 2025 00:55:22 +0000 (0:00:00.320) 0:11:32.265 ************
2025-05-05 00:56:33.073691 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073696 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073701 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073706 | orchestrator |
2025-05-05 00:56:33.073711 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-05 00:56:33.073716 | orchestrator | Monday 05 May 2025 00:55:22 +0000 (0:00:00.357) 0:11:32.622 ************
2025-05-05 00:56:33.073721 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073726 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073731 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073735 | orchestrator |
2025-05-05 00:56:33.073740 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-05 00:56:33.073745 | orchestrator | Monday 05 May 2025 00:55:22 +0000 (0:00:00.361) 0:11:32.983 ************
2025-05-05 00:56:33.073750 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073755 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073760 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073765 | orchestrator |
2025-05-05 00:56:33.073770 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-05 00:56:33.073775 |
orchestrator | Monday 05 May 2025 00:55:23 +0000 (0:00:01.016) 0:11:34.000 ************
2025-05-05 00:56:33.073780 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073785 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073790 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073795 | orchestrator |
2025-05-05 00:56:33.073800 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-05 00:56:33.073808 | orchestrator | Monday 05 May 2025 00:55:24 +0000 (0:00:00.356) 0:11:34.356 ************
2025-05-05 00:56:33.073813 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073818 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073823 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073828 | orchestrator |
2025-05-05 00:56:33.073833 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-05 00:56:33.073838 | orchestrator | Monday 05 May 2025 00:55:24 +0000 (0:00:00.353) 0:11:34.710 ************
2025-05-05 00:56:33.073843 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073850 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073858 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073867 | orchestrator |
2025-05-05 00:56:33.073876 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-05 00:56:33.073885 | orchestrator | Monday 05 May 2025 00:55:24 +0000 (0:00:00.371) 0:11:35.081 ************
2025-05-05 00:56:33.073890 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073895 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073900 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073905 | orchestrator |
2025-05-05 00:56:33.073910 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-05 00:56:33.073915 | orchestrator | Monday 05 May 2025
00:55:25 +0000 (0:00:00.853) 0:11:35.935 ************
2025-05-05 00:56:33.073920 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.073925 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.073929 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.073934 | orchestrator |
2025-05-05 00:56:33.073939 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-05 00:56:33.073944 | orchestrator | Monday 05 May 2025 00:55:26 +0000 (0:00:00.408) 0:11:36.344 ************
2025-05-05 00:56:33.073949 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073954 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073959 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073964 | orchestrator |
2025-05-05 00:56:33.073969 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-05 00:56:33.073974 | orchestrator | Monday 05 May 2025 00:55:26 +0000 (0:00:00.349) 0:11:36.694 ************
2025-05-05 00:56:33.073979 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.073984 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.073989 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.073994 | orchestrator |
2025-05-05 00:56:33.074011 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-05 00:56:33.074034 | orchestrator | Monday 05 May 2025 00:55:26 +0000 (0:00:00.344) 0:11:37.038 ************
2025-05-05 00:56:33.074039 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074044 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074049 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074054 | orchestrator |
2025-05-05 00:56:33.074062 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-05 00:56:33.074067 | orchestrator | Monday 05 May 2025 00:55:27 +0000
(0:00:00.641) 0:11:37.680 ************
2025-05-05 00:56:33.074072 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.074079 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.074084 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.074089 | orchestrator |
2025-05-05 00:56:33.074093 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-05 00:56:33.074098 | orchestrator | Monday 05 May 2025 00:55:27 +0000 (0:00:00.316) 0:11:37.996 ************
2025-05-05 00:56:33.074103 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074108 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074113 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074118 | orchestrator |
2025-05-05 00:56:33.074123 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-05 00:56:33.074128 | orchestrator | Monday 05 May 2025 00:55:28 +0000 (0:00:00.299) 0:11:38.295 ************
2025-05-05 00:56:33.074136 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074141 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074146 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074151 | orchestrator |
2025-05-05 00:56:33.074156 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-05 00:56:33.074161 | orchestrator | Monday 05 May 2025 00:55:28 +0000 (0:00:00.303) 0:11:38.599 ************
2025-05-05 00:56:33.074189 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074195 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074200 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074205 | orchestrator |
2025-05-05 00:56:33.074209 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-05 00:56:33.074214 | orchestrator | Monday 05 May 2025 00:55:28 +0000 (0:00:00.480)
0:11:39.080 ************
2025-05-05 00:56:33.074219 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074224 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074229 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074234 | orchestrator |
2025-05-05 00:56:33.074239 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-05 00:56:33.074244 | orchestrator | Monday 05 May 2025 00:55:29 +0000 (0:00:00.320) 0:11:39.401 ************
2025-05-05 00:56:33.074249 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074254 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074259 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074264 | orchestrator |
2025-05-05 00:56:33.074269 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-05 00:56:33.074274 | orchestrator | Monday 05 May 2025 00:55:29 +0000 (0:00:00.324) 0:11:39.725 ************
2025-05-05 00:56:33.074279 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074284 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074289 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074294 | orchestrator |
2025-05-05 00:56:33.074298 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-05 00:56:33.074318 | orchestrator | Monday 05 May 2025 00:55:29 +0000 (0:00:00.325) 0:11:40.051 ************
2025-05-05 00:56:33.074328 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074333 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074338 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074343 | orchestrator |
2025-05-05 00:56:33.074348 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-05 00:56:33.074354 | orchestrator | Monday 05 May 2025 00:55:30
+0000 (0:00:00.478) 0:11:40.529 ************
2025-05-05 00:56:33.074359 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074364 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074369 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074374 | orchestrator |
2025-05-05 00:56:33.074379 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-05 00:56:33.074384 | orchestrator | Monday 05 May 2025 00:55:30 +0000 (0:00:00.299) 0:11:40.829 ************
2025-05-05 00:56:33.074389 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074394 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074399 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074404 | orchestrator |
2025-05-05 00:56:33.074409 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-05 00:56:33.074414 | orchestrator | Monday 05 May 2025 00:55:30 +0000 (0:00:00.302) 0:11:41.132 ************
2025-05-05 00:56:33.074419 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074426 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074434 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074442 | orchestrator |
2025-05-05 00:56:33.074449 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-05 00:56:33.074462 | orchestrator | Monday 05 May 2025 00:55:31 +0000 (0:00:00.296) 0:11:41.428 ************
2025-05-05 00:56:33.074470 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074477 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074485 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074492 | orchestrator |
2025-05-05 00:56:33.074501 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-05 00:56:33.074509 | orchestrator | Monday 05 May 2025 00:55:31 +0000 (0:00:00.518) 0:11:41.947 ************
2025-05-05 00:56:33.074517 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074525 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074533 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074541 | orchestrator |
2025-05-05 00:56:33.074549 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-05 00:56:33.074561 | orchestrator | Monday 05 May 2025 00:55:32 +0000 (0:00:00.328) 0:11:42.276 ************
2025-05-05 00:56:33.074569 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.074576 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-05 00:56:33.074585 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074590 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-05 00:56:33.074595 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-05 00:56:33.074600 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074605 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-05 00:56:33.074610 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-05 00:56:33.074615 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074620 | orchestrator |
2025-05-05 00:56:33.074625 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-05 00:56:33.074630 | orchestrator | Monday 05 May 2025 00:55:32 +0000 (0:00:00.368) 0:11:42.645 ************
2025-05-05 00:56:33.074635 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-05 00:56:33.074642 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-05 00:56:33.074648 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074652 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-05 00:56:33.074657 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-05 00:56:33.074662 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074667 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-05 00:56:33.074672 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-05 00:56:33.074677 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074681 | orchestrator |
2025-05-05 00:56:33.074686 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-05 00:56:33.074692 | orchestrator | Monday 05 May 2025 00:55:32 +0000 (0:00:00.345) 0:11:42.990 ************
2025-05-05 00:56:33.074696 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074701 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074706 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074711 | orchestrator |
2025-05-05 00:56:33.074715 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-05 00:56:33.074720 | orchestrator | Monday 05 May 2025 00:55:33 +0000 (0:00:00.445) 0:11:43.435 ************
2025-05-05 00:56:33.074725 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074730 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074735 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074739 | orchestrator |
2025-05-05 00:56:33.074744 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-05 00:56:33.074750 | orchestrator | Monday 05 May 2025 00:55:33 +0000 (0:00:00.305) 0:11:43.741 ************
2025-05-05 00:56:33.074755 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074762 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074767 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074780 | orchestrator |
2025-05-05 00:56:33.074785 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-05 00:56:33.074790 | orchestrator | Monday 05 May 2025 00:55:33 +0000 (0:00:00.366) 0:11:44.107 ************
2025-05-05 00:56:33.074795 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074799 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074804 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074809 | orchestrator |
2025-05-05 00:56:33.074814 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-05 00:56:33.074819 | orchestrator | Monday 05 May 2025 00:55:34 +0000 (0:00:00.310) 0:11:44.418 ************
2025-05-05 00:56:33.074823 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074828 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074833 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074838 | orchestrator |
2025-05-05 00:56:33.074842 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-05 00:56:33.074847 | orchestrator | Monday 05 May 2025 00:55:34 +0000 (0:00:00.488) 0:11:44.907 ************
2025-05-05 00:56:33.074852 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074857 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074861 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.074866 | orchestrator |
2025-05-05 00:56:33.074871 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-05 00:56:33.074876 | orchestrator | Monday 05 May 2025 00:55:35 +0000 (0:00:00.288) 0:11:45.195 ************
2025-05-05 00:56:33.074881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.074885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-05
00:56:33.074890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.074895 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074900 | orchestrator |
2025-05-05 00:56:33.074905 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-05 00:56:33.074909 | orchestrator | Monday 05 May 2025 00:55:35 +0000 (0:00:00.390) 0:11:45.586 ************
2025-05-05 00:56:33.074914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.074919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.074924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.074929 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074934 | orchestrator |
2025-05-05 00:56:33.074939 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-05 00:56:33.074944 | orchestrator | Monday 05 May 2025 00:55:35 +0000 (0:00:00.417) 0:11:46.004 ************
2025-05-05 00:56:33.074948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.074953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.074958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.074963 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074968 | orchestrator |
2025-05-05 00:56:33.074972 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.074981 | orchestrator | Monday 05 May 2025 00:55:36 +0000 (0:00:00.423) 0:11:46.428 ************
2025-05-05 00:56:33.074986 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.074991 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.074996 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075001 | orchestrator |
2025-05-05 00:56:33.075006 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-05 00:56:33.075010 | orchestrator | Monday 05 May 2025 00:55:36 +0000 (0:00:00.872) 0:11:46.754 ************
2025-05-05 00:56:33.075015 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.075020 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075025 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-05 00:56:33.075033 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075038 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-05 00:56:33.075042 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075047 | orchestrator |
2025-05-05 00:56:33.075052 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-05 00:56:33.075057 | orchestrator | Monday 05 May 2025 00:55:37 +0000 (0:00:00.342) 0:11:47.627 ************
2025-05-05 00:56:33.075062 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075067 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075072 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075077 | orchestrator |
2025-05-05 00:56:33.075082 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:56:33.075086 | orchestrator | Monday 05 May 2025 00:55:37 +0000 (0:00:00.352) 0:11:47.969 ************
2025-05-05 00:56:33.075091 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075096 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075101 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075106 | orchestrator |
2025-05-05 00:56:33.075110 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-05 00:56:33.075115 | orchestrator | Monday 05 May 2025 00:55:38 +0000 (0:00:00.352) 0:11:48.322 ************
2025-05-05 00:56:33.075120 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-05 00:56:33.075125 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075130 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-05 00:56:33.075135 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075140 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-05 00:56:33.075144 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075149 | orchestrator |
2025-05-05 00:56:33.075154 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-05 00:56:33.075159 | orchestrator | Monday 05 May 2025 00:55:38 +0000 (0:00:00.476) 0:11:48.798 ************
2025-05-05 00:56:33.075164 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.075172 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075179 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.075187 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075195 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.075202 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075210 | orchestrator |
2025-05-05 00:56:33.075218 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-05 00:56:33.075226 | orchestrator | Monday 05 May 2025 00:55:39 +0000 (0:00:00.634) 0:11:49.433 ************
2025-05-05 00:56:33.075231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.075236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.075241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.075246 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075251 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-05 00:56:33.075255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-05 00:56:33.075260 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-05 00:56:33.075265 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075270 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-05 00:56:33.075274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-05 00:56:33.075279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-05 00:56:33.075284 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075289 | orchestrator |
2025-05-05 00:56:33.075297 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-05 00:56:33.075302 | orchestrator | Monday 05 May 2025 00:55:39 +0000 (0:00:00.613) 0:11:50.047 ************
2025-05-05 00:56:33.075338 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075344 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075349 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075354 | orchestrator |
2025-05-05 00:56:33.075359 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-05 00:56:33.075364 | orchestrator | Monday 05 May 2025 00:55:40 +0000 (0:00:00.807) 0:11:50.854 ************
2025-05-05 00:56:33.075369 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.075374 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075378 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-05 00:56:33.075383 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075388 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-05 00:56:33.075393 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075398 | orchestrator |
2025-05-05 00:56:33.075403 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-05 00:56:33.075408 | orchestrator | Monday 05 May 2025 00:55:41 +0000 (0:00:00.651) 0:11:51.506 ************
2025-05-05 00:56:33.075413 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075423 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075428 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075433 | orchestrator |
2025-05-05 00:56:33.075438 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-05 00:56:33.075443 | orchestrator | Monday 05 May 2025 00:55:42 +0000 (0:00:00.785) 0:11:52.292 ************
2025-05-05 00:56:33.075448 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075453 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075458 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075463 | orchestrator |
2025-05-05 00:56:33.075468 | orchestrator | TASK [ceph-rgw : include common.yml] *******************************************
2025-05-05 00:56:33.075473 | orchestrator | Monday 05 May 2025 00:55:42 +0000 (0:00:00.533) 0:11:52.825 ************
2025-05-05 00:56:33.075478 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.075482 | orchestrator |
2025-05-05 00:56:33.075487 | orchestrator | TASK [ceph-rgw : create rados gateway directories] *****************************
2025-05-05 00:56:33.075492 | orchestrator | Monday 05 May 2025 00:55:43 +0000 (0:00:00.777) 0:11:53.603 ************
2025-05-05 00:56:33.075497 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2025-05-05 00:56:33.075502 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2025-05-05 00:56:33.075507 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2025-05-05 00:56:33.075512 | orchestrator |
2025-05-05 00:56:33.075517 | orchestrator | TASK [ceph-rgw : get keys from monitors] ***************************************
2025-05-05 00:56:33.075522 | orchestrator | Monday 05 May 2025 00:55:44 +0000 (0:00:00.702) 0:11:54.305 ************
2025-05-05 00:56:33.075527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:56:33.075532 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.075536 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-05 00:56:33.075541 | orchestrator |
2025-05-05 00:56:33.075546 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
2025-05-05 00:56:33.075551 | orchestrator | Monday 05 May 2025 00:55:45 +0000 (0:00:01.800) 0:11:56.105 ************
2025-05-05 00:56:33.075556 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-05 00:56:33.075561 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-05 00:56:33.075566 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.075571 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.075576 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-05 00:56:33.075585 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.075590 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-05 00:56:33.075595 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-05 00:56:33.075600 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.075604 | orchestrator |
2025-05-05 00:56:33.075609 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
2025-05-05 00:56:33.075614 | orchestrator | Monday 05 May 2025 00:55:47 +0000 (0:00:01.160) 0:11:57.266 ************
2025-05-05 00:56:33.075619 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075624 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075630 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075635 | orchestrator |
2025-05-05 00:56:33.075640 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
2025-05-05 00:56:33.075645 | orchestrator | Monday 05 May 2025 00:55:47 +0000 (0:00:00.562) 0:11:57.828 ************
2025-05-05 00:56:33.075650 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075655 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075660 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075664 | orchestrator |
2025-05-05 00:56:33.075670 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] **************************************
2025-05-05 00:56:33.075674 | orchestrator | Monday 05 May 2025 00:55:47 +0000 (0:00:00.333) 0:11:58.161 ************
2025-05-05 00:56:33.075679 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-05-05 00:56:33.075685 | orchestrator |
2025-05-05 00:56:33.075690 | orchestrator | TASK [ceph-rgw : create ec profile] ********************************************
2025-05-05 00:56:33.075694 | orchestrator | Monday 05 May 2025 00:55:48 +0000 (0:00:00.230) 0:11:58.392 ************
2025-05-05 00:56:33.075699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075727 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075732 | orchestrator |
2025-05-05 00:56:33.075737 | orchestrator | TASK [ceph-rgw : set crush rule] ***********************************************
2025-05-05 00:56:33.075742 | orchestrator | Monday 05 May 2025 00:55:49 +0000 (0:00:00.909) 0:11:59.302 ************
2025-05-05 00:56:33.075747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075777 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075782 | orchestrator |
2025-05-05 00:56:33.075787 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] **************************************
2025-05-05 00:56:33.075792 | orchestrator | Monday 05 May 2025 00:55:50 +0000 (0:00:00.891) 0:12:00.193 ************
2025-05-05 00:56:33.075800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075825 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075830 | orchestrator |
2025-05-05 00:56:33.075835 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ******************************
2025-05-05 00:56:33.075840 | orchestrator | Monday 05 May 2025 00:55:50 +0000 (0:00:00.658) 0:12:00.852 ************
2025-05-05 00:56:33.075845 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075851 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075856 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075861 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075866 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-05 00:56:33.075871 | orchestrator |
2025-05-05 00:56:33.075876 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] *************************
2025-05-05 00:56:33.075881 | orchestrator | Monday 05 May 2025 00:56:14 +0000 (0:00:23.632) 0:12:24.484 ************
2025-05-05 00:56:33.075886 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075891 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075896 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075900 | orchestrator |
2025-05-05 00:56:33.075905 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ******************************
2025-05-05 00:56:33.075910 | orchestrator | Monday 05 May 2025 00:56:14 +0000 (0:00:00.477) 0:12:24.962 ************
2025-05-05 00:56:33.075915 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.075920 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.075925 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.075930 | orchestrator |
2025-05-05 00:56:33.075935 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] *********************************
2025-05-05 00:56:33.075940 | orchestrator | Monday 05 May 2025 00:56:15 +0000 (0:00:00.341) 0:12:25.303 ************
2025-05-05 00:56:33.075945 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.075950 | orchestrator |
2025-05-05 00:56:33.075957 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] *************************************
2025-05-05 00:56:33.075962 | orchestrator | Monday 05 May 2025 00:56:15 +0000 (0:00:00.568) 0:12:25.872 ************
2025-05-05 00:56:33.075967 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.075972 | orchestrator |
2025-05-05 00:56:33.075977 | orchestrator | TASK [ceph-rgw : generate systemd unit file] ***********************************
2025-05-05 00:56:33.075982 | orchestrator | Monday 05 May 2025 00:56:16 +0000 (0:00:00.792) 0:12:26.664 ************
2025-05-05 00:56:33.075990 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.075995 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.076000 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.076005 | orchestrator |
2025-05-05 00:56:33.076009 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ********************
2025-05-05 00:56:33.076014 | orchestrator | Monday 05 May 2025 00:56:17 +0000 (0:00:01.180) 0:12:27.845 ************
2025-05-05 00:56:33.076019 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.076024 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.076029 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.076034 | orchestrator |
2025-05-05 00:56:33.076039 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] ***********************************
2025-05-05 00:56:33.076046 | orchestrator | Monday 05 May 2025 00:56:18 +0000 (0:00:02.011) 0:12:28.965 ************
2025-05-05 00:56:33.076051 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.076056 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.076061 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.076066 | orchestrator |
2025-05-05 00:56:33.076071 | orchestrator | TASK [ceph-rgw : systemd start rgw container] **********************************
2025-05-05 00:56:33.076075 | orchestrator | Monday 05 May 2025 00:56:20 +0000 (0:00:02.011) 0:12:30.977 ************
2025-05-05 00:56:33.076080 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.076085 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.076090 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-05 00:56:33.076095 | orchestrator |
2025-05-05 00:56:33.076100 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] *****************************
2025-05-05 00:56:33.076105 | orchestrator | Monday 05 May 2025 00:56:22 +0000 (0:00:01.874) 0:12:32.851 ************
2025-05-05 00:56:33.076110 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.076115 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:56:33.076120 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:56:33.076125 | orchestrator |
2025-05-05 00:56:33.076129 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-05 00:56:33.076134 | orchestrator | Monday 05 May 2025 00:56:23 +0000 (0:00:01.170) 0:12:34.022 ************
2025-05-05 00:56:33.076139 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.076144 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.076149 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.076154 | orchestrator |
2025-05-05 00:56:33.076158 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-05 00:56:33.076163 | orchestrator | Monday 05 May 2025 00:56:24 +0000 (0:00:00.704) 0:12:34.726 ************
2025-05-05 00:56:33.076169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:56:33.076173 | orchestrator |
2025-05-05 00:56:33.076178 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-05-05 00:56:33.076183 | orchestrator | Monday 05 May 2025 00:56:25 +0000 (0:00:00.385) 0:12:35.533 ************
2025-05-05 00:56:33.076188 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.076193 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.076198 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.076203 | orchestrator |
2025-05-05 00:56:33.076208 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-05-05 00:56:33.076213 | orchestrator | Monday 05 May 2025 00:56:25 +0000 (0:00:00.385) 0:12:35.919 ************
2025-05-05 00:56:33.076218 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.076223 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.076228 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.076233 | orchestrator |
2025-05-05 00:56:33.076241 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-05-05 00:56:33.076246 | orchestrator | Monday 05 May 2025 00:56:27 +0000 (0:00:01.270) 0:12:37.189 ************
2025-05-05 00:56:33.076250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:56:33.076255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:56:33.076260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:56:33.076265 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:56:33.076270 | orchestrator |
2025-05-05 00:56:33.076275 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-05-05 00:56:33.076280 | orchestrator | Monday 05 May 2025 00:56:27 +0000 (0:00:00.975) 0:12:38.164 ************
2025-05-05 00:56:33.076285 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:56:33.076290 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:56:33.076295 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:56:33.076300 | orchestrator |
2025-05-05 00:56:33.076320 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-05 00:56:33.076325 | orchestrator | Monday 05 May 2025 00:56:28 +0000 (0:00:00.384) 0:12:38.549 ************
2025-05-05 00:56:33.076331 | orchestrator | changed: [testbed-node-3]
2025-05-05 00:56:33.076336 | orchestrator | changed: [testbed-node-4]
2025-05-05 00:56:33.076341 | orchestrator | changed: [testbed-node-5]
2025-05-05 00:56:33.076345 | orchestrator |
2025-05-05 00:56:33.076350 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:56:33.076355 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-05 00:56:33.076361 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-05 00:56:33.076366 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-05 00:56:33.076371 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-05 00:56:33.076376 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-05 00:56:33.076380 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-05 00:56:33.076385 | orchestrator |
2025-05-05 00:56:33.076390 | orchestrator |
2025-05-05 00:56:33.076395 | orchestrator |
2025-05-05 00:56:33.076402 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:56:33.076407 | orchestrator | Monday 05 May 2025 00:56:29 +0000 (0:00:01.419) 0:12:39.969 ************
2025-05-05 00:56:33.076412 | orchestrator | ===============================================================================
2025-05-05 00:56:33.076417 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 46.34s
2025-05-05 00:56:33.076422 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 38.45s
2025-05-05 00:56:33.076429 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 23.63s
2025-05-05 00:56:33.076434 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.45s
2025-05-05 00:56:33.076439 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.03s
2025-05-05 00:56:33.076444 | orchestrator | ceph-mon : fetch ceph initial keys ------------------------------------- 13.73s
2025-05-05 00:56:33.076449 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.57s
2025-05-05 00:56:33.076454 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.68s
2025-05-05 00:56:33.076459 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.34s
2025-05-05 00:56:33.076468 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.00s
2025-05-05 00:56:33.076473 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.18s
2025-05-05 00:56:33.076477 | orchestrator | ceph-config : create ceph initial directories --------------------------- 5.81s
2025-05-05 00:56:33.076482 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.14s
2025-05-05 00:56:33.076487 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.30s
2025-05-05 00:56:33.076492 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.15s
2025-05-05 00:56:33.076519 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.10s
2025-05-05 00:56:33.076525 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 3.96s
2025-05-05 00:56:33.076530 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.33s
2025-05-05 00:56:33.076545 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.31s
2025-05-05 00:56:33.076551 | orchestrator | ceph-facts : find a running mon container ------------------------------- 3.30s
2025-05-05 00:56:33.076556 | orchestrator |
2025-05-05 00:56:33.076561 | orchestrator |
2025-05-05 00:56:33.076565 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-05-05 00:56:33.076570 | orchestrator |
2025-05-05 00:56:33.076575 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-05 00:56:33.076580 | orchestrator | Monday 05 May 2025 00:53:08 +0000 (0:00:00.124) 0:00:00.124 ************
2025-05-05 00:56:33.076585 | orchestrator | ok: [localhost] => {
2025-05-05 00:56:33.076590 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-05-05 00:56:33.076595 | orchestrator | }
2025-05-05 00:56:33.076600 | orchestrator |
2025-05-05 00:56:33.076605 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-05-05 00:56:33.076610 | orchestrator | Monday 05 May 2025 00:53:08 +0000 (0:00:00.043) 0:00:00.167 ************
2025-05-05 00:56:33.076615 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-05-05 00:56:33.076620 | orchestrator | ...ignoring
2025-05-05 00:56:33.076625 | orchestrator |
2025-05-05 00:56:33.076630 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-05-05 00:56:33.076635 | orchestrator | Monday 05 May 2025 00:53:10 +0000 (0:00:02.510) 0:00:02.678 ************
2025-05-05 00:56:33.076640 | orchestrator | skipping: [localhost]
2025-05-05 00:56:33.076645 | orchestrator |
2025-05-05 00:56:33.076650 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-05-05 00:56:33.076655 | orchestrator | Monday 05 May 2025 00:53:10 +0000 (0:00:00.060) 0:00:02.739 ************
2025-05-05 00:56:33.076660 | orchestrator | ok: [localhost]
2025-05-05 00:56:33.076665 | orchestrator |
2025-05-05 00:56:33.076670 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:56:33.076674 | orchestrator |
2025-05-05 00:56:33.076679 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 00:56:33.076685 | orchestrator | Monday 05 May 2025 00:53:10 +0000 (0:00:00.106) 0:00:02.845 ************
2025-05-05 00:56:33.076690 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.076694 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.076699 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.076704 | orchestrator |
2025-05-05 00:56:33.076709 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:56:33.076714 | orchestrator | Monday 05 May 2025 00:53:11 +0000 (0:00:00.264) 0:00:03.109 ************
2025-05-05 00:56:33.076719 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-05-05 00:56:33.076731 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-05-05 00:56:33.076736 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-05-05 00:56:33.076745 | orchestrator |
2025-05-05 00:56:33.076750 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-05-05 00:56:33.076755 | orchestrator |
2025-05-05 00:56:33.076760 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-05-05 00:56:33.076765 | orchestrator | Monday 05 May 2025 00:53:11 +0000 (0:00:00.370) 0:00:03.479 ************
2025-05-05 00:56:33.076770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-05 00:56:33.076775 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-05 00:56:33.076780 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-05 00:56:33.076785 | orchestrator |
2025-05-05 00:56:33.076792 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-05 00:56:33.076797 | orchestrator | Monday 05 May 2025 00:53:11 +0000 (0:00:00.436) 0:00:03.915 ************
2025-05-05 00:56:33.076803 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:56:33.076808 | orchestrator |
2025-05-05 00:56:33.076813 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-05-05 00:56:33.076818 | orchestrator | Monday 05 May 2025 00:53:12 +0000 (0:00:00.528) 0:00:04.444 ************
2025-05-05 00:56:33.076825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-05 00:56:33.076835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-05 00:56:33.076844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor',
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.076850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.076857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.076866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.076871 | orchestrator | 2025-05-05 00:56:33.076876 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-05 00:56:33.076881 | orchestrator | Monday 05 May 2025 00:53:15 +0000 (0:00:03.502) 0:00:07.947 ************ 2025-05-05 00:56:33.076886 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.076891 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.076896 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.076901 | orchestrator | 2025-05-05 00:56:33.076906 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-05 00:56:33.076913 | orchestrator | Monday 05 May 2025 00:53:16 +0000 (0:00:00.697) 0:00:08.644 ************ 2025-05-05 00:56:33.076919 | orchestrator | 
skipping: [testbed-node-1] 2025-05-05 00:56:33.076926 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.076931 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.076936 | orchestrator | 2025-05-05 00:56:33.076941 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-05 00:56:33.076946 | orchestrator | Monday 05 May 2025 00:53:18 +0000 (0:00:01.357) 0:00:10.002 ************ 2025-05-05 00:56:33.076951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.076957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.076970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.076976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.076981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.076989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.076994 | orchestrator | 2025-05-05 00:56:33.076999 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-05 00:56:33.077004 | orchestrator | Monday 05 May 2025 00:53:23 +0000 (0:00:05.467) 0:00:15.470 ************ 2025-05-05 00:56:33.077009 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.077014 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.077019 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.077024 | orchestrator | 2025-05-05 00:56:33.077029 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-05 00:56:33.077037 | orchestrator | Monday 05 May 2025 00:53:24 +0000 (0:00:01.070) 0:00:16.540 ************ 2025-05-05 00:56:33.077042 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.077047 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.077052 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.077056 | orchestrator | 2025-05-05 00:56:33.077061 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-05 00:56:33.077066 | orchestrator | Monday 05 May 2025 00:53:32 +0000 (0:00:07.493) 0:00:24.033 ************ 2025-05-05 00:56:33.077072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.077081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.077093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-05 00:56:33.077099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-05 00:56:33.077107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-05 00:56:33.077113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-05 00:56:33.077118 | orchestrator |
2025-05-05 00:56:33.077123 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-05-05 00:56:33.077128 | orchestrator | Monday 05 May 2025 00:53:35 +0000 (0:00:03.735) 0:00:27.769 ************
2025-05-05 00:56:33.077133 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.077138 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:56:33.077143 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:56:33.077161 | orchestrator |
2025-05-05 00:56:33.077166 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-05-05 00:56:33.077171 | orchestrator | Monday 05 May 2025 00:53:36 +0000 (0:00:01.101) 0:00:28.871 ************
2025-05-05 00:56:33.077176 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.077181 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.077189 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.077194 | orchestrator |
2025-05-05 00:56:33.077199 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-05-05 00:56:33.077204 | orchestrator | Monday 05 May 2025 00:53:37 +0000 (0:00:00.356) 0:00:29.227 ************
2025-05-05 00:56:33.077209 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.077214 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.077219 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.077224 | orchestrator |
2025-05-05 00:56:33.077229 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-05-05 00:56:33.077234 | orchestrator | Monday 05 May 2025 00:53:37 +0000 (0:00:00.359) 0:00:29.586 ************
2025-05-05 00:56:33.077239 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-05-05 00:56:33.077244 | orchestrator | ...ignoring
2025-05-05 00:56:33.077249 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-05-05 00:56:33.077254 | orchestrator | ...ignoring
2025-05-05 00:56:33.077259 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-05-05 00:56:33.077264 | orchestrator | ...ignoring
2025-05-05 00:56:33.077269 | orchestrator |
2025-05-05 00:56:33.077277 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-05-05 00:56:33.077282 | orchestrator | Monday 05 May 2025 00:53:48 +0000 (0:00:10.778) 0:00:40.365 ************
2025-05-05 00:56:33.077287 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.077292 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.077297 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.077302 | orchestrator |
2025-05-05 00:56:33.077318 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-05-05 00:56:33.077323 | orchestrator | Monday 05 May 2025 00:53:49 +0000 (0:00:00.619) 0:00:40.984 ************
2025-05-05 00:56:33.077328 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.077333 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.077338 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.077343 | orchestrator |
2025-05-05 00:56:33.077350 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-05-05 00:56:33.077355 | orchestrator | Monday 05 May 2025 00:53:49 +0000 (0:00:00.555) 0:00:41.540 ************
2025-05-05 00:56:33.077360 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.077365 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.077370 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.077375 | orchestrator |
2025-05-05 00:56:33.077380 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-05-05 00:56:33.077385 | orchestrator | Monday 05 May 2025 00:53:50 +0000 (0:00:00.517) 0:00:42.057 ************
2025-05-05 00:56:33.077390 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.077395 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.077400 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.077405 | orchestrator |
2025-05-05 00:56:33.077410 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-05-05 00:56:33.077415 | orchestrator | Monday 05 May 2025 00:53:50 +0000 (0:00:00.600) 0:00:42.658 ************
2025-05-05 00:56:33.077420 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.077425 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:56:33.077430 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:56:33.077435 | orchestrator |
2025-05-05 00:56:33.077440 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-05-05 00:56:33.077445 | orchestrator | Monday 05 May 2025 00:53:51 +0000 (0:00:00.679) 0:00:43.338 ************
2025-05-05 00:56:33.077450 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.077455 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.077460 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.077465 | orchestrator |
2025-05-05 00:56:33.077470 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-05 00:56:33.077475 | orchestrator | Monday 05 May 2025 00:53:51 +0000 (0:00:00.527) 0:00:43.866 ************
2025-05-05 00:56:33.077480 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.077485 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.077489 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-05-05 00:56:33.077494 | orchestrator |
2025-05-05 00:56:33.077499 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-05-05 00:56:33.077504 | orchestrator | Monday 05 May 2025 00:53:52 +0000 (0:00:00.513) 0:00:44.379 ************
2025-05-05 00:56:33.077509 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.077514 | orchestrator |
2025-05-05 00:56:33.077519 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-05-05 00:56:33.077524 | orchestrator | Monday 05 May 2025 00:54:02 +0000 (0:00:10.307) 0:00:54.687 ************
2025-05-05 00:56:33.077529 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:56:33.077534 | orchestrator |
2025-05-05 00:56:33.077539 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-05 00:56:33.077544 | orchestrator | Monday 05 May 2025 00:54:02 +0000 (0:00:00.164) 0:00:54.852 ************
2025-05-05 00:56:33.077549 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:56:33.077556 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:56:33.077561 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:56:33.077566 | orchestrator |
2025-05-05 00:56:33.077571 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-05-05 00:56:33.077576 | orchestrator | Monday 05 May 2025 00:54:04 +0000 (0:00:01.384) 0:00:56.236 ************
2025-05-05 00:56:33.077581 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:56:33.077586 | orchestrator |
2025-05-05 00:56:33.077591 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-05-05 00:56:33.077596 | orchestrator | Monday 05 May 2025 00:54:13 +0000 (0:00:09.127) 0:01:05.363 ************
2025-05-05 00:56:33.077603 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left).
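The "FAILED - RETRYING ... (10 retries left)" line above is not an error: the handler re-runs its port-liveness check on a retries/delay loop until the freshly bootstrapped container starts answering, and it succeeds on the next attempt. The shape of that loop can be sketched as follows (a minimal stand-in for Ansible's `until`/`retries`/`delay` task keywords, not kolla-ansible's implementation):

```python
import time
from typing import Callable


def wait_until(check: Callable[[], bool], retries: int = 10, delay: float = 2.0) -> bool:
    """Re-run `check` until it returns True or `retries` attempts are
    exhausted, sleeping `delay` seconds between attempts. Each failed
    attempt corresponds to one "FAILED - RETRYING: ... (N retries left)"
    line in the Ansible output."""
    for attempt in range(retries):
        if check():
            return True
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```

In the run above the first attempt fires while mysqld is still initializing, so one retry message is logged before the handler reports `ok`.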
2025-05-05 00:56:33.077609 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.077614 | orchestrator | 2025-05-05 00:56:33.077619 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-05 00:56:33.077624 | orchestrator | Monday 05 May 2025 00:54:20 +0000 (0:00:07.176) 0:01:12.540 ************ 2025-05-05 00:56:33.077629 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.077634 | orchestrator | 2025-05-05 00:56:33.077639 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-05 00:56:33.077644 | orchestrator | Monday 05 May 2025 00:54:23 +0000 (0:00:02.609) 0:01:15.149 ************ 2025-05-05 00:56:33.077649 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.077654 | orchestrator | 2025-05-05 00:56:33.077659 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-05 00:56:33.077663 | orchestrator | Monday 05 May 2025 00:54:23 +0000 (0:00:00.125) 0:01:15.275 ************ 2025-05-05 00:56:33.077668 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.077673 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.077678 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.077683 | orchestrator | 2025-05-05 00:56:33.077688 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-05 00:56:33.077693 | orchestrator | Monday 05 May 2025 00:54:23 +0000 (0:00:00.451) 0:01:15.726 ************ 2025-05-05 00:56:33.077698 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.077703 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.077711 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.077716 | orchestrator | 2025-05-05 00:56:33.077720 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-05 00:56:33.077725 | orchestrator | Monday 05 May 
2025 00:54:24 +0000 (0:00:00.527) 0:01:16.254 ************ 2025-05-05 00:56:33.077730 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-05 00:56:33.077735 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.077740 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.077745 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.077750 | orchestrator | 2025-05-05 00:56:33.077758 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-05 00:56:33.077763 | orchestrator | skipping: no hosts matched 2025-05-05 00:56:33.077768 | orchestrator | 2025-05-05 00:56:33.077773 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-05 00:56:33.077778 | orchestrator | 2025-05-05 00:56:33.077783 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-05 00:56:33.077788 | orchestrator | Monday 05 May 2025 00:54:42 +0000 (0:00:18.440) 0:01:34.694 ************ 2025-05-05 00:56:33.077793 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:56:33.077798 | orchestrator | 2025-05-05 00:56:33.077803 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-05 00:56:33.077808 | orchestrator | Monday 05 May 2025 00:55:04 +0000 (0:00:22.076) 0:01:56.771 ************ 2025-05-05 00:56:33.077813 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.077818 | orchestrator | 2025-05-05 00:56:33.077823 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-05 00:56:33.077830 | orchestrator | Monday 05 May 2025 00:55:20 +0000 (0:00:15.596) 0:02:12.367 ************ 2025-05-05 00:56:33.077836 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.077841 | orchestrator | 2025-05-05 00:56:33.077846 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2025-05-05 00:56:33.077851 | orchestrator | 2025-05-05 00:56:33.077856 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-05 00:56:33.077861 | orchestrator | Monday 05 May 2025 00:55:23 +0000 (0:00:02.767) 0:02:15.135 ************ 2025-05-05 00:56:33.077866 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:56:33.077871 | orchestrator | 2025-05-05 00:56:33.077876 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-05 00:56:33.077881 | orchestrator | Monday 05 May 2025 00:55:36 +0000 (0:00:13.632) 0:02:28.768 ************ 2025-05-05 00:56:33.077886 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.077890 | orchestrator | 2025-05-05 00:56:33.077895 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-05 00:56:33.077900 | orchestrator | Monday 05 May 2025 00:55:57 +0000 (0:00:20.538) 0:02:49.306 ************ 2025-05-05 00:56:33.077905 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.077910 | orchestrator | 2025-05-05 00:56:33.077915 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-05 00:56:33.077920 | orchestrator | 2025-05-05 00:56:33.077925 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-05 00:56:33.077930 | orchestrator | Monday 05 May 2025 00:55:59 +0000 (0:00:02.509) 0:02:51.816 ************ 2025-05-05 00:56:33.077935 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.077940 | orchestrator | 2025-05-05 00:56:33.077945 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-05 00:56:33.077950 | orchestrator | Monday 05 May 2025 00:56:10 +0000 (0:00:10.188) 0:03:02.004 ************ 2025-05-05 00:56:33.077955 | orchestrator | ok: [testbed-node-0] 2025-05-05 
00:56:33.077960 | orchestrator | 2025-05-05 00:56:33.077965 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-05 00:56:33.077969 | orchestrator | Monday 05 May 2025 00:56:14 +0000 (0:00:04.533) 0:03:06.538 ************ 2025-05-05 00:56:33.077978 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.077983 | orchestrator | 2025-05-05 00:56:33.077988 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-05 00:56:33.077993 | orchestrator | 2025-05-05 00:56:33.077998 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-05 00:56:33.078003 | orchestrator | Monday 05 May 2025 00:56:17 +0000 (0:00:02.583) 0:03:09.121 ************ 2025-05-05 00:56:33.078008 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:56:33.078031 | orchestrator | 2025-05-05 00:56:33.078037 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-05 00:56:33.078042 | orchestrator | Monday 05 May 2025 00:56:17 +0000 (0:00:00.732) 0:03:09.854 ************ 2025-05-05 00:56:33.078047 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.078052 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.078057 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.078062 | orchestrator | 2025-05-05 00:56:33.078069 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-05 00:56:33.078075 | orchestrator | Monday 05 May 2025 00:56:20 +0000 (0:00:02.556) 0:03:12.410 ************ 2025-05-05 00:56:33.078079 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.078084 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.078090 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.078094 | orchestrator | 2025-05-05 00:56:33.078099 | orchestrator | TASK 
[mariadb : Creating database backup user and setting permissions] ********* 2025-05-05 00:56:33.078104 | orchestrator | Monday 05 May 2025 00:56:22 +0000 (0:00:02.156) 0:03:14.567 ************ 2025-05-05 00:56:33.078109 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.078114 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.078122 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.078127 | orchestrator | 2025-05-05 00:56:33.078135 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-05 00:56:33.078140 | orchestrator | Monday 05 May 2025 00:56:24 +0000 (0:00:02.278) 0:03:16.845 ************ 2025-05-05 00:56:33.078145 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.078150 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.078155 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:56:33.078160 | orchestrator | 2025-05-05 00:56:33.078164 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-05 00:56:33.078169 | orchestrator | Monday 05 May 2025 00:56:27 +0000 (0:00:02.137) 0:03:18.983 ************ 2025-05-05 00:56:33.078174 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:56:33.078179 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:56:33.078184 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:56:33.078189 | orchestrator | 2025-05-05 00:56:33.078194 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-05 00:56:33.078199 | orchestrator | Monday 05 May 2025 00:56:30 +0000 (0:00:03.779) 0:03:22.762 ************ 2025-05-05 00:56:33.078204 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:56:33.078209 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:56:33.078214 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:56:33.078219 | orchestrator | 2025-05-05 00:56:33.078224 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-05 00:56:33.078229 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-05 00:56:33.078234 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-05 00:56:33.078239 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-05 00:56:33.078244 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-05 00:56:33.078249 | orchestrator | 2025-05-05 00:56:33.078267 | orchestrator | 2025-05-05 00:56:33.078273 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:56:33.078278 | orchestrator | Monday 05 May 2025 00:56:31 +0000 (0:00:00.439) 0:03:23.202 ************ 2025-05-05 00:56:33.078283 | orchestrator | =============================================================================== 2025-05-05 00:56:33.078287 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.13s 2025-05-05 00:56:33.078292 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.71s 2025-05-05 00:56:33.078297 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 18.44s 2025-05-05 00:56:33.078302 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.78s 2025-05-05 00:56:33.078318 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.31s 2025-05-05 00:56:33.078324 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.19s 2025-05-05 00:56:33.078329 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.13s 2025-05-05 00:56:33.078334 | orchestrator | mariadb : Copying over 
galera.cnf --------------------------------------- 7.49s 2025-05-05 00:56:33.078338 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.18s 2025-05-05 00:56:33.078343 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.47s 2025-05-05 00:56:33.078348 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.28s 2025-05-05 00:56:33.078353 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.53s 2025-05-05 00:56:33.078358 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.78s 2025-05-05 00:56:33.078366 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.74s 2025-05-05 00:56:33.078372 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.50s 2025-05-05 00:56:33.078376 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.61s 2025-05-05 00:56:33.078381 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.58s 2025-05-05 00:56:33.078386 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.56s 2025-05-05 00:56:33.078391 | orchestrator | Check MariaDB service --------------------------------------------------- 2.51s 2025-05-05 00:56:33.078396 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.28s 2025-05-05 00:56:33.078401 | orchestrator | 2025-05-05 00:56:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:33.078408 | orchestrator | 2025-05-05 00:56:33 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:36.102740 | orchestrator | 2025-05-05 00:56:33 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:36.102869 | orchestrator | 
2025-05-05 00:56:33 | INFO  | Task 3224f3e6-ef87-4cef-925d-0645d48d9604 is in state SUCCESS 2025-05-05 00:56:36.102889 | orchestrator | 2025-05-05 00:56:33 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:36.102906 | orchestrator | 2025-05-05 00:56:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:36.102938 | orchestrator | 2025-05-05 00:56:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:36.105167 | orchestrator | 2025-05-05 00:56:36 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:36.105225 | orchestrator | 2025-05-05 00:56:36 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:39.147229 | orchestrator | 2025-05-05 00:56:36 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:39.147421 | orchestrator | 2025-05-05 00:56:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:39.147469 | orchestrator | 2025-05-05 00:56:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:39.148059 | orchestrator | 2025-05-05 00:56:39 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:39.148815 | orchestrator | 2025-05-05 00:56:39 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:39.150365 | orchestrator | 2025-05-05 00:56:39 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:42.199067 | orchestrator | 2025-05-05 00:56:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:42.199210 | orchestrator | 2025-05-05 00:56:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:42.199672 | orchestrator | 2025-05-05 00:56:42 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:42.199712 | orchestrator | 2025-05-05 00:56:42 | INFO  | 
Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:42.200626 | orchestrator | 2025-05-05 00:56:42 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:42.200792 | orchestrator | 2025-05-05 00:56:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:45.258853 | orchestrator | 2025-05-05 00:56:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:45.259662 | orchestrator | 2025-05-05 00:56:45 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:45.259726 | orchestrator | 2025-05-05 00:56:45 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:45.260776 | orchestrator | 2025-05-05 00:56:45 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:48.310374 | orchestrator | 2025-05-05 00:56:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:48.310560 | orchestrator | 2025-05-05 00:56:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:48.311496 | orchestrator | 2025-05-05 00:56:48 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:48.312792 | orchestrator | 2025-05-05 00:56:48 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:48.313498 | orchestrator | 2025-05-05 00:56:48 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:48.313619 | orchestrator | 2025-05-05 00:56:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:51.373585 | orchestrator | 2025-05-05 00:56:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:51.377434 | orchestrator | 2025-05-05 00:56:51 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:51.377559 | orchestrator | 2025-05-05 00:56:51 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:51.377712 | orchestrator | 2025-05-05 00:56:51 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:54.428899 | orchestrator | 2025-05-05 00:56:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:54.429045 | orchestrator | 2025-05-05 00:56:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:54.429461 | orchestrator | 2025-05-05 00:56:54 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:54.430583 | orchestrator | 2025-05-05 00:56:54 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:54.432157 | orchestrator | 2025-05-05 00:56:54 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:56:57.478914 | orchestrator | 2025-05-05 00:56:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:56:57.479058 | orchestrator | 2025-05-05 00:56:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:56:57.479274 | orchestrator | 2025-05-05 00:56:57 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:56:57.480036 | orchestrator | 2025-05-05 00:56:57 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:56:57.480872 | orchestrator | 2025-05-05 00:56:57 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:00.531182 | orchestrator | 2025-05-05 00:56:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:00.531383 | orchestrator | 2025-05-05 00:57:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:00.534425 | orchestrator | 2025-05-05 00:57:00 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:00.534500 | orchestrator | 2025-05-05 00:57:00 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:00.536194 | orchestrator | 2025-05-05 00:57:00 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:00.538579 | orchestrator | 2025-05-05 00:57:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:03.581377 | orchestrator | 2025-05-05 00:57:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:03.582267 | orchestrator | 2025-05-05 00:57:03 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:03.587921 | orchestrator | 2025-05-05 00:57:03 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:06.613052 | orchestrator | 2025-05-05 00:57:03 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:06.613162 | orchestrator | 2025-05-05 00:57:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:06.613198 | orchestrator | 2025-05-05 00:57:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:06.614488 | orchestrator | 2025-05-05 00:57:06 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:06.616031 | orchestrator | 2025-05-05 00:57:06 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:06.619432 | orchestrator | 2025-05-05 00:57:06 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:09.654730 | orchestrator | 2025-05-05 00:57:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:09.654854 | orchestrator | 2025-05-05 00:57:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:09.655749 | orchestrator | 2025-05-05 00:57:09 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:09.657750 | orchestrator | 2025-05-05 00:57:09 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:09.658333 | orchestrator | 2025-05-05 00:57:09 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:12.687231 | orchestrator | 2025-05-05 00:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:12.687396 | orchestrator | 2025-05-05 00:57:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:12.688209 | orchestrator | 2025-05-05 00:57:12 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:12.688609 | orchestrator | 2025-05-05 00:57:12 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:12.689265 | orchestrator | 2025-05-05 00:57:12 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:15.732257 | orchestrator | 2025-05-05 00:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:15.732407 | orchestrator | 2025-05-05 00:57:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:15.733185 | orchestrator | 2025-05-05 00:57:15 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:15.733218 | orchestrator | 2025-05-05 00:57:15 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:15.733606 | orchestrator | 2025-05-05 00:57:15 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:15.733762 | orchestrator | 2025-05-05 00:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:18.783257 | orchestrator | 2025-05-05 00:57:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:18.784981 | orchestrator | 2025-05-05 00:57:18 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:18.787876 | orchestrator | 2025-05-05 00:57:18 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:18.790378 | orchestrator | 2025-05-05 00:57:18 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:21.846586 | orchestrator | 2025-05-05 00:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:21.846745 | orchestrator | 2025-05-05 00:57:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:21.849211 | orchestrator | 2025-05-05 00:57:21 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:21.851381 | orchestrator | 2025-05-05 00:57:21 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:21.853531 | orchestrator | 2025-05-05 00:57:21 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:24.904670 | orchestrator | 2025-05-05 00:57:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:24.904816 | orchestrator | 2025-05-05 00:57:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:24.910219 | orchestrator | 2025-05-05 00:57:24 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:24.910530 | orchestrator | 2025-05-05 00:57:24 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:24.910558 | orchestrator | 2025-05-05 00:57:24 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:24.910600 | orchestrator | 2025-05-05 00:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:27.954231 | orchestrator | 2025-05-05 00:57:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:27.955271 | orchestrator | 2025-05-05 00:57:27 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:27.957103 | orchestrator | 2025-05-05 00:57:27 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:27.960582 | orchestrator | 2025-05-05 00:57:27 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:31.031179 | orchestrator | 2025-05-05 00:57:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:31.031397 | orchestrator | 2025-05-05 00:57:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:31.032133 | orchestrator | 2025-05-05 00:57:31 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:31.034139 | orchestrator | 2025-05-05 00:57:31 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:31.036500 | orchestrator | 2025-05-05 00:57:31 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:31.036785 | orchestrator | 2025-05-05 00:57:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:34.088174 | orchestrator | 2025-05-05 00:57:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:34.090489 | orchestrator | 2025-05-05 00:57:34 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:34.092864 | orchestrator | 2025-05-05 00:57:34 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:34.095014 | orchestrator | 2025-05-05 00:57:34 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:37.142783 | orchestrator | 2025-05-05 00:57:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:37.142928 | orchestrator | 2025-05-05 00:57:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:37.144894 | orchestrator | 2025-05-05 00:57:37 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:37.147459 | orchestrator | 2025-05-05 00:57:37 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:37.149390 | orchestrator | 2025-05-05 00:57:37 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:40.211842 | orchestrator | 2025-05-05 00:57:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:40.211980 | orchestrator | 2025-05-05 00:57:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:40.214456 | orchestrator | 2025-05-05 00:57:40 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:40.216172 | orchestrator | 2025-05-05 00:57:40 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:40.217158 | orchestrator | 2025-05-05 00:57:40 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:43.269065 | orchestrator | 2025-05-05 00:57:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:43.269219 | orchestrator | 2025-05-05 00:57:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:43.270203 | orchestrator | 2025-05-05 00:57:43 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:43.272496 | orchestrator | 2025-05-05 00:57:43 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:43.274261 | orchestrator | 2025-05-05 00:57:43 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:46.324411 | orchestrator | 2025-05-05 00:57:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:46.324570 | orchestrator | 2025-05-05 00:57:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:46.326187 | orchestrator | 2025-05-05 00:57:46 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:46.329769 | orchestrator | 2025-05-05 00:57:46 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:46.331700 | orchestrator | 2025-05-05 00:57:46 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:49.368474 | orchestrator | 2025-05-05 00:57:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:49.368638 | orchestrator | 2025-05-05 00:57:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:49.369219 | orchestrator | 2025-05-05 00:57:49 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:49.371253 | orchestrator | 2025-05-05 00:57:49 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:49.372888 | orchestrator | 2025-05-05 00:57:49 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:52.412900 | orchestrator | 2025-05-05 00:57:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:52.413034 | orchestrator | 2025-05-05 00:57:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:52.414671 | orchestrator | 2025-05-05 00:57:52 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:52.417481 | orchestrator | 2025-05-05 00:57:52 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:52.419866 | orchestrator | 2025-05-05 00:57:52 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:55.465548 | orchestrator | 2025-05-05 00:57:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:55.465688 | orchestrator | 2025-05-05 00:57:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:55.467280 | orchestrator | 2025-05-05 00:57:55 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:55.469578 | orchestrator | 2025-05-05 00:57:55 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:55.471577 | orchestrator | 2025-05-05 00:57:55 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:57:55.471807 | orchestrator | 2025-05-05 00:57:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:57:58.528010 | orchestrator | 2025-05-05 00:57:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:57:58.529442 | orchestrator | 2025-05-05 00:57:58 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:57:58.529489 | orchestrator | 2025-05-05 00:57:58 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:57:58.529514 | orchestrator | 2025-05-05 00:57:58 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:58:01.580739 | orchestrator | 2025-05-05 00:57:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:58:01.580882 | orchestrator | 2025-05-05 00:58:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:58:01.582215 | orchestrator | 2025-05-05 00:58:01 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:58:01.584434 | orchestrator | 2025-05-05 00:58:01 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:58:01.586486 | orchestrator | 2025-05-05 00:58:01 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:58:04.634990 | orchestrator | 2025-05-05 00:58:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:58:04.635138 | orchestrator | 2025-05-05 00:58:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:58:04.636708 | orchestrator | 2025-05-05 00:58:04 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:58:04.638479 | orchestrator | 2025-05-05 00:58:04 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:58:04.640274 | orchestrator | 2025-05-05 00:58:04 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:58:07.685192 | orchestrator | 2025-05-05 00:58:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:58:07.685398 | orchestrator | 2025-05-05 00:58:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:58:07.687039 | orchestrator | 2025-05-05 00:58:07 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:58:07.688720 | orchestrator | 2025-05-05 00:58:07 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:58:07.690555 | orchestrator | 2025-05-05 00:58:07 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:58:10.739047 | orchestrator | 2025-05-05 00:58:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:58:10.739177 | orchestrator | 2025-05-05 00:58:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:58:10.740162 | orchestrator | 2025-05-05 00:58:10 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:58:10.741608 | orchestrator | 2025-05-05 00:58:10 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:58:10.742984 | orchestrator | 2025-05-05 00:58:10 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:58:13.806734 | orchestrator | 2025-05-05 00:58:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:58:13.806881 | orchestrator | 2025-05-05 00:58:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:58:13.807515 | orchestrator | 2025-05-05 00:58:13 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:58:13.807558 | orchestrator | 2025-05-05 00:58:13 | INFO  | Task 
4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:58:13.809137 | orchestrator | 2025-05-05 00:58:13 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state STARTED 2025-05-05 00:58:13.809485 | orchestrator | 2025-05-05 00:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:58:16.858954 | orchestrator | 2025-05-05 00:58:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:58:16.860628 | orchestrator | 2025-05-05 00:58:16 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED 2025-05-05 00:58:16.862809 | orchestrator | 2025-05-05 00:58:16 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED 2025-05-05 00:58:16.864701 | orchestrator | 2025-05-05 00:58:16 | INFO  | Task 0d0ff4c2-0c62-42bc-ad9c-bfb4c1b2c109 is in state SUCCESS 2025-05-05 00:58:16.866684 | orchestrator | 2025-05-05 00:58:16.866954 | orchestrator | 2025-05-05 00:58:16.866979 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 00:58:16.866995 | orchestrator | 2025-05-05 00:58:16.867011 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 00:58:16.867041 | orchestrator | Monday 05 May 2025 00:56:34 +0000 (0:00:00.365) 0:00:00.365 ************ 2025-05-05 00:58:16.867056 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.867072 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.867087 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.867101 | orchestrator | 2025-05-05 00:58:16.867116 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 00:58:16.867130 | orchestrator | Monday 05 May 2025 00:56:35 +0000 (0:00:00.467) 0:00:00.833 ************ 2025-05-05 00:58:16.867145 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-05 00:58:16.867159 | orchestrator | ok: [testbed-node-1] => 
(item=enable_horizon_True) 2025-05-05 00:58:16.867173 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-05 00:58:16.867188 | orchestrator | 2025-05-05 00:58:16.867202 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-05 00:58:16.867216 | orchestrator | 2025-05-05 00:58:16.867230 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-05 00:58:16.867244 | orchestrator | Monday 05 May 2025 00:56:35 +0000 (0:00:00.331) 0:00:01.164 ************ 2025-05-05 00:58:16.867259 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:58:16.867274 | orchestrator | 2025-05-05 00:58:16.867288 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-05 00:58:16.867302 | orchestrator | Monday 05 May 2025 00:56:36 +0000 (0:00:00.664) 0:00:01.828 ************ 2025-05-05 00:58:16.867322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.867432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.867451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.867476 | orchestrator | 2025-05-05 00:58:16.867491 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-05 00:58:16.867506 | orchestrator | Monday 05 May 2025 00:56:37 +0000 (0:00:01.749) 0:00:03.578 ************ 2025-05-05 00:58:16.867520 | 
orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.867535 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.867549 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.867563 | orchestrator | 2025-05-05 00:58:16.867578 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-05 00:58:16.867594 | orchestrator | Monday 05 May 2025 00:56:38 +0000 (0:00:00.289) 0:00:03.867 ************ 2025-05-05 00:58:16.867618 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-05 00:58:16.867635 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-05 00:58:16.867651 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-05 00:58:16.867666 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-05 00:58:16.867683 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-05 00:58:16.867698 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-05 00:58:16.867713 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-05 00:58:16.867728 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-05 00:58:16.867744 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-05 00:58:16.867760 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-05 00:58:16.867776 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-05 00:58:16.867791 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-05 00:58:16.867807 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 
'enabled': False})  2025-05-05 00:58:16.867823 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-05 00:58:16.867846 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-05 00:58:16.867862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-05 00:58:16.867877 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-05 00:58:16.867892 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-05 00:58:16.867908 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-05 00:58:16.867930 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-05 00:58:16.867947 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-05 00:58:16.867963 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-05 00:58:16.867985 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-05 00:58:16.867999 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-05 00:58:16.868014 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-05 00:58:16.868028 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-05 00:58:16.868043 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-05 00:58:16.868057 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-05 00:58:16.868071 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-05 00:58:16.868086 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-05 00:58:16.868100 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-05 00:58:16.868114 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-05 00:58:16.868128 | orchestrator | 2025-05-05 00:58:16.868143 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.868157 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:01.004) 0:00:04.872 ************ 2025-05-05 00:58:16.868171 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.868185 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.868200 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.868214 | orchestrator | 2025-05-05 00:58:16.868228 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.868243 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.450) 0:00:05.322 ************ 2025-05-05 00:58:16.868257 | orchestrator | skipping: [testbed-node-0] 2025-05-05 
00:58:16.868272 | orchestrator | 2025-05-05 00:58:16.868292 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.868307 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.119) 0:00:05.442 ************ 2025-05-05 00:58:16.868321 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.868356 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.868379 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.868393 | orchestrator | 2025-05-05 00:58:16.868407 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.868422 | orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.434) 0:00:05.876 ************ 2025-05-05 00:58:16.868436 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.868450 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.868464 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.868478 | orchestrator | 2025-05-05 00:58:16.868493 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.868507 | orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.336) 0:00:06.213 ************ 2025-05-05 00:58:16.868521 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.868535 | orchestrator | 2025-05-05 00:58:16.868550 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.868564 | orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.303) 0:00:06.516 ************ 2025-05-05 00:58:16.868578 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.868592 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.868611 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.868626 | orchestrator | 2025-05-05 00:58:16.868640 | orchestrator | TASK [horizon : Update policy file name] *************************************** 
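[editor's note] The "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines at the top of this section come from polling a set of task IDs until each reaches a terminal state (the last task flips to SUCCESS at 00:58:16). A minimal sketch of such a poll loop is shown below; `get_state` is a hypothetical stand-in for the real OSISM task API, and the fake timeline replaces real task progress, so this illustrates only the loop shape, not the actual client code:

```python
# Sketch of the task-state polling seen above: query every task, log its
# state, and repeat until all are terminal (SUCCESS/FAILURE).
# The real client sleeps between rounds ("Wait 1 second(s) until the next check").

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, max_checks=100):
    """Poll until every task is terminal; return the final states."""
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s in TERMINAL for s in states.values()):
            return states
        # real code: time.sleep(1)
    raise TimeoutError("tasks did not reach a terminal state")

# Fake API for illustration: task "b" finishes one round after task "a".
timeline = {"a": ["STARTED", "SUCCESS", "SUCCESS"],
            "b": ["STARTED", "STARTED", "SUCCESS"]}
calls = {"a": 0, "b": 0}

def fake_state(tid):
    i = min(calls[tid], len(timeline[tid]) - 1)
    calls[tid] += 1
    return timeline[tid][i]

final = wait_for_tasks(["a", "b"], fake_state)  # returns after the third round
```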
2025-05-05 00:58:16.868654 | orchestrator | Monday 05 May 2025 00:56:41 +0000 (0:00:00.538) 0:00:07.055 ************ 2025-05-05 00:58:16.868668 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.868682 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.868697 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.868711 | orchestrator | 2025-05-05 00:58:16.868725 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.868739 | orchestrator | Monday 05 May 2025 00:56:41 +0000 (0:00:00.537) 0:00:07.593 ************ 2025-05-05 00:58:16.868753 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.868767 | orchestrator | 2025-05-05 00:58:16.868781 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.868796 | orchestrator | Monday 05 May 2025 00:56:42 +0000 (0:00:00.132) 0:00:07.725 ************ 2025-05-05 00:58:16.868810 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.868824 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.868838 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.868852 | orchestrator | 2025-05-05 00:58:16.868866 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.868880 | orchestrator | Monday 05 May 2025 00:56:42 +0000 (0:00:00.444) 0:00:08.170 ************ 2025-05-05 00:58:16.868895 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.868909 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.868923 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.868937 | orchestrator | 2025-05-05 00:58:16.868951 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.868965 | orchestrator | Monday 05 May 2025 00:56:43 +0000 (0:00:00.512) 0:00:08.682 ************ 2025-05-05 00:58:16.868979 | orchestrator | skipping: 
[testbed-node-0] 2025-05-05 00:58:16.868993 | orchestrator | 2025-05-05 00:58:16.869007 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.869021 | orchestrator | Monday 05 May 2025 00:56:43 +0000 (0:00:00.112) 0:00:08.795 ************ 2025-05-05 00:58:16.869035 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869049 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.869064 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.869078 | orchestrator | 2025-05-05 00:58:16.869092 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.869107 | orchestrator | Monday 05 May 2025 00:56:43 +0000 (0:00:00.431) 0:00:09.226 ************ 2025-05-05 00:58:16.869121 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.869135 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.869150 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.869171 | orchestrator | 2025-05-05 00:58:16.869185 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.869199 | orchestrator | Monday 05 May 2025 00:56:43 +0000 (0:00:00.336) 0:00:09.563 ************ 2025-05-05 00:58:16.869213 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869227 | orchestrator | 2025-05-05 00:58:16.869242 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.869256 | orchestrator | Monday 05 May 2025 00:56:44 +0000 (0:00:00.322) 0:00:09.885 ************ 2025-05-05 00:58:16.869270 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869285 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.869299 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.869313 | orchestrator | 2025-05-05 00:58:16.869373 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-05-05 00:58:16.869391 | orchestrator | Monday 05 May 2025 00:56:44 +0000 (0:00:00.353) 0:00:10.239 ************ 2025-05-05 00:58:16.869405 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.869420 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.869434 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.869449 | orchestrator | 2025-05-05 00:58:16.869463 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.869477 | orchestrator | Monday 05 May 2025 00:56:45 +0000 (0:00:00.614) 0:00:10.853 ************ 2025-05-05 00:58:16.869491 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869505 | orchestrator | 2025-05-05 00:58:16.869519 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.869533 | orchestrator | Monday 05 May 2025 00:56:45 +0000 (0:00:00.130) 0:00:10.984 ************ 2025-05-05 00:58:16.869548 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869562 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.869576 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.869590 | orchestrator | 2025-05-05 00:58:16.869604 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.869619 | orchestrator | Monday 05 May 2025 00:56:45 +0000 (0:00:00.578) 0:00:11.563 ************ 2025-05-05 00:58:16.869639 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.869654 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.869668 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.869682 | orchestrator | 2025-05-05 00:58:16.869696 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.869710 | orchestrator | Monday 05 May 2025 00:56:46 +0000 (0:00:00.421) 0:00:11.984 ************ 2025-05-05 
00:58:16.869724 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869738 | orchestrator | 2025-05-05 00:58:16.869753 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.869767 | orchestrator | Monday 05 May 2025 00:56:46 +0000 (0:00:00.120) 0:00:12.105 ************ 2025-05-05 00:58:16.869781 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869795 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.869809 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.869823 | orchestrator | 2025-05-05 00:58:16.869837 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.869851 | orchestrator | Monday 05 May 2025 00:56:46 +0000 (0:00:00.429) 0:00:12.534 ************ 2025-05-05 00:58:16.869865 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.869879 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.869893 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.869907 | orchestrator | 2025-05-05 00:58:16.869921 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.869935 | orchestrator | Monday 05 May 2025 00:56:47 +0000 (0:00:00.471) 0:00:13.005 ************ 2025-05-05 00:58:16.869949 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.869963 | orchestrator | 2025-05-05 00:58:16.869977 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.869999 | orchestrator | Monday 05 May 2025 00:56:47 +0000 (0:00:00.106) 0:00:13.112 ************ 2025-05-05 00:58:16.870013 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870082 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.870098 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.870112 | orchestrator | 2025-05-05 00:58:16.870127 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-05-05 00:58:16.870141 | orchestrator | Monday 05 May 2025 00:56:47 +0000 (0:00:00.281) 0:00:13.393 ************ 2025-05-05 00:58:16.870155 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.870169 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.870183 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.870197 | orchestrator | 2025-05-05 00:58:16.870212 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.870226 | orchestrator | Monday 05 May 2025 00:56:48 +0000 (0:00:00.461) 0:00:13.854 ************ 2025-05-05 00:58:16.870240 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870254 | orchestrator | 2025-05-05 00:58:16.870268 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.870282 | orchestrator | Monday 05 May 2025 00:56:48 +0000 (0:00:00.136) 0:00:13.991 ************ 2025-05-05 00:58:16.870296 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870310 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.870324 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.870357 | orchestrator | 2025-05-05 00:58:16.870371 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.870385 | orchestrator | Monday 05 May 2025 00:56:48 +0000 (0:00:00.490) 0:00:14.482 ************ 2025-05-05 00:58:16.870399 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.870415 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.870438 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.870455 | orchestrator | 2025-05-05 00:58:16.870470 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.870484 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:00.514) 0:00:14.997 ************ 
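[editor's note] The repetition around this point is the horizon role running the same trio of tasks (Update policy file name, Check if policies shall be overwritten, Update custom policy file name) once per service pulled in via policy_item.yml, while services with a false `enabled` flag (cloudkitty, ironic, masakari, ...) are skipped up front. A small illustrative sketch of that filter-and-iterate pattern follows; the `policy_pass` helper and its return value are hypothetical, since kolla-ansible expresses this in Ansible tasks rather than Python:

```python
# Illustrative sketch of the per-service policy loop in the log:
# disabled services are skipped, each enabled service gets a policy-file pass.
services = [
    {"name": "cloudkitty", "enabled": False},
    {"name": "ironic", "enabled": False},
    {"name": "cinder", "enabled": True},
    {"name": "designate", "enabled": True},
    {"name": "nova", "enabled": True},
]

def policy_pass(service_name):
    # Hypothetical stand-in for policy_item.yml: pick the policy file name
    # for this service (the real role also checks for custom overrides).
    return f"{service_name}_policy.yaml"

processed = [policy_pass(s["name"]) for s in services if s["enabled"]]
skipped = [s["name"] for s in services if not s["enabled"]]
```

Note that in the actual log the `enabled` values are a mix of booleans (`True`) and strings (`'yes'`), both of which Ansible treats as truthy in the loop condition.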
2025-05-05 00:58:16.870499 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870513 | orchestrator | 2025-05-05 00:58:16.870528 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.870542 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:00.125) 0:00:15.123 ************ 2025-05-05 00:58:16.870556 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870570 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.870585 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.870599 | orchestrator | 2025-05-05 00:58:16.870613 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-05 00:58:16.870628 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:00.470) 0:00:15.593 ************ 2025-05-05 00:58:16.870662 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:58:16.870677 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:58:16.870691 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:58:16.870706 | orchestrator | 2025-05-05 00:58:16.870725 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-05 00:58:16.870739 | orchestrator | Monday 05 May 2025 00:56:50 +0000 (0:00:01.020) 0:00:16.614 ************ 2025-05-05 00:58:16.870754 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870768 | orchestrator | 2025-05-05 00:58:16.870783 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-05 00:58:16.870797 | orchestrator | Monday 05 May 2025 00:56:51 +0000 (0:00:00.291) 0:00:16.906 ************ 2025-05-05 00:58:16.870811 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.870825 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.870840 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.870854 | orchestrator | 2025-05-05 00:58:16.870868 | orchestrator | TASK 
[horizon : Copying over config.json files for services] ******************* 2025-05-05 00:58:16.870883 | orchestrator | Monday 05 May 2025 00:56:52 +0000 (0:00:00.790) 0:00:17.696 ************ 2025-05-05 00:58:16.870905 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:58:16.870919 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:58:16.870934 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:58:16.870948 | orchestrator | 2025-05-05 00:58:16.870962 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-05 00:58:16.870976 | orchestrator | Monday 05 May 2025 00:56:55 +0000 (0:00:03.091) 0:00:20.788 ************ 2025-05-05 00:58:16.870991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-05 00:58:16.871012 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-05 00:58:16.871027 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-05 00:58:16.871041 | orchestrator | 2025-05-05 00:58:16.871055 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-05 00:58:16.871070 | orchestrator | Monday 05 May 2025 00:56:58 +0000 (0:00:03.192) 0:00:23.980 ************ 2025-05-05 00:58:16.871084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-05 00:58:16.871099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-05 00:58:16.871113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-05 00:58:16.871127 | orchestrator | 2025-05-05 00:58:16.871153 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-05 00:58:16.871168 | orchestrator | 
Monday 05 May 2025 00:57:02 +0000 (0:00:03.813) 0:00:27.793 ************ 2025-05-05 00:58:16.871182 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-05 00:58:16.871197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-05 00:58:16.871211 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-05 00:58:16.871225 | orchestrator | 2025-05-05 00:58:16.871239 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-05 00:58:16.871253 | orchestrator | Monday 05 May 2025 00:57:04 +0000 (0:00:02.784) 0:00:30.578 ************ 2025-05-05 00:58:16.871267 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.871281 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.871296 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.871310 | orchestrator | 2025-05-05 00:58:16.871324 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-05 00:58:16.871359 | orchestrator | Monday 05 May 2025 00:57:05 +0000 (0:00:00.264) 0:00:30.843 ************ 2025-05-05 00:58:16.871373 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.871388 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.871402 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.871416 | orchestrator | 2025-05-05 00:58:16.871430 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-05 00:58:16.871444 | orchestrator | Monday 05 May 2025 00:57:05 +0000 (0:00:00.337) 0:00:31.180 ************ 2025-05-05 00:58:16.871472 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:58:16.871487 | orchestrator | 2025-05-05 00:58:16.871501 
| orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-05 00:58:16.871515 | orchestrator | Monday 05 May 2025 00:57:06 +0000 (0:00:00.542) 0:00:31.722 ************ 2025-05-05 00:58:16.871538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.871565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.871590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.871614 | orchestrator | 2025-05-05 00:58:16.871629 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-05 00:58:16.871644 | orchestrator | Monday 05 May 2025 00:57:07 +0000 (0:00:01.525) 0:00:33.248 ************ 2025-05-05 00:58:16.871659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:58:16.871681 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.871703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:58:16.871736 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.871752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:58:16.871774 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.871789 | orchestrator | 2025-05-05 00:58:16.871803 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-05 00:58:16.871817 | orchestrator | Monday 05 May 2025 00:57:08 +0000 (0:00:00.832) 0:00:34.081 ************ 2025-05-05 00:58:16.871841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:58:16.871858 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.871873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:58:16.871895 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.871919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': 
'30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-05 00:58:16.871943 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.871957 | orchestrator | 2025-05-05 00:58:16.871972 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-05 00:58:16.871986 | orchestrator | Monday 05 May 2025 00:57:09 +0000 (0:00:00.933) 0:00:35.014 ************ 2025-05-05 00:58:16.872008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.872024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-05-05 00:58:16.872053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-05 00:58:16.872070 | orchestrator | 2025-05-05 00:58:16.872085 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-05 00:58:16.872099 | orchestrator | Monday 05 May 2025 00:57:14 +0000 (0:00:05.533) 0:00:40.548 ************ 2025-05-05 00:58:16.872114 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:58:16.872128 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:58:16.872142 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:58:16.872157 | orchestrator | 2025-05-05 00:58:16.872171 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-05 00:58:16.872185 | orchestrator | Monday 05 May 2025 00:57:15 +0000 (0:00:00.331) 0:00:40.879 ************ 2025-05-05 00:58:16.872200 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:58:16.872214 | orchestrator | 2025-05-05 00:58:16.872228 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-05 00:58:16.872242 | orchestrator | Monday 05 May 2025 00:57:15 +0000 (0:00:00.480) 0:00:41.360 ************ 2025-05-05 00:58:16.872257 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:58:16.872271 | orchestrator | 2025-05-05 00:58:16.872291 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-05 00:58:16.872312 | orchestrator | Monday 05 May 2025 00:57:17 +0000 (0:00:02.305) 0:00:43.665 ************ 2025-05-05 00:58:16.872326 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:58:16.872395 | orchestrator | 2025-05-05 00:58:16.872409 | orchestrator | TASK 
[horizon : Running Horizon bootstrap container] ***************************
2025-05-05 00:58:16.872423 | orchestrator | Monday 05 May 2025 00:57:20 +0000 (0:00:02.163) 0:00:45.829 ************
2025-05-05 00:58:16.872438 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:58:16.872452 | orchestrator |
2025-05-05 00:58:16.872466 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-05 00:58:16.872481 | orchestrator | Monday 05 May 2025 00:57:34 +0000 (0:00:14.222) 0:01:00.051 ************
2025-05-05 00:58:16.872495 | orchestrator |
2025-05-05 00:58:16.872509 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-05 00:58:16.872523 | orchestrator | Monday 05 May 2025 00:57:34 +0000 (0:00:00.058) 0:01:00.110 ************
2025-05-05 00:58:16.872538 | orchestrator |
2025-05-05 00:58:16.872552 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-05 00:58:16.872566 | orchestrator | Monday 05 May 2025 00:57:34 +0000 (0:00:00.175) 0:01:00.285 ************
2025-05-05 00:58:16.872579 | orchestrator |
2025-05-05 00:58:16.872591 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-05 00:58:16.872604 | orchestrator | Monday 05 May 2025 00:57:34 +0000 (0:00:00.062) 0:01:00.348 ************
2025-05-05 00:58:16.872616 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:58:16.872629 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:58:16.872642 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:58:16.872655 | orchestrator |
2025-05-05 00:58:16.872667 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:58:16.872680 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-05 00:58:16.872693 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-05 00:58:16.872706 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-05 00:58:16.872719 | orchestrator |
2025-05-05 00:58:16.872731 | orchestrator |
2025-05-05 00:58:16.872743 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:58:16.872756 | orchestrator | Monday 05 May 2025 00:58:15 +0000 (0:00:40.421) 0:01:40.769 ************
2025-05-05 00:58:16.872768 | orchestrator | ===============================================================================
2025-05-05 00:58:16.872781 | orchestrator | horizon : Restart horizon container ------------------------------------ 40.42s
2025-05-05 00:58:16.872794 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.22s
2025-05-05 00:58:16.872806 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.53s
2025-05-05 00:58:16.872819 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.81s
2025-05-05 00:58:16.872831 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.19s
2025-05-05 00:58:16.872844 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.09s
2025-05-05 00:58:16.872856 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.78s
2025-05-05 00:58:16.872869 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.31s
2025-05-05 00:58:16.872881 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.16s
2025-05-05 00:58:16.872894 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.75s
2025-05-05 00:58:16.872906 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.53s
2025-05-05 00:58:16.872919 | orchestrator | horizon : Update policy file name --------------------------------------- 1.02s
2025-05-05 00:58:16.872944 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.00s
2025-05-05 00:58:19.911801 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.93s
2025-05-05 00:58:19.911929 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.83s
2025-05-05 00:58:19.911968 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.79s
2025-05-05 00:58:19.911984 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2025-05-05 00:58:19.911999 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s
2025-05-05 00:58:19.912014 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s
2025-05-05 00:58:19.912028 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2025-05-05 00:58:19.912059 | orchestrator | 2025-05-05 00:58:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:19.913418 | orchestrator | 2025-05-05 00:58:19 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:19.915886 | orchestrator | 2025-05-05 00:58:19 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:22.956724 | orchestrator | 2025-05-05 00:58:19 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:22.956865 | orchestrator | 2025-05-05 00:58:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:22.957017 | orchestrator | 2025-05-05 00:58:22 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:22.957837 | orchestrator | 2025-05-05 00:58:22 | INFO
 | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:26.017574 | orchestrator | 2025-05-05 00:58:22 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:26.017727 | orchestrator | 2025-05-05 00:58:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:26.018584 | orchestrator | 2025-05-05 00:58:26 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:26.020265 | orchestrator | 2025-05-05 00:58:26 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:29.062510 | orchestrator | 2025-05-05 00:58:26 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:29.062668 | orchestrator | 2025-05-05 00:58:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:29.064528 | orchestrator | 2025-05-05 00:58:29 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:29.067662 | orchestrator | 2025-05-05 00:58:29 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:32.113537 | orchestrator | 2025-05-05 00:58:29 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:32.113697 | orchestrator | 2025-05-05 00:58:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:32.114646 | orchestrator | 2025-05-05 00:58:32 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:32.116001 | orchestrator | 2025-05-05 00:58:32 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:32.116288 | orchestrator | 2025-05-05 00:58:32 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:35.160643 | orchestrator | 2025-05-05 00:58:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:35.161803 | orchestrator | 2025-05-05 00:58:35 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:35.163423 | orchestrator | 2025-05-05 00:58:35 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:38.211910 | orchestrator | 2025-05-05 00:58:35 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:38.212059 | orchestrator | 2025-05-05 00:58:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:38.213546 | orchestrator | 2025-05-05 00:58:38 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:38.214245 | orchestrator | 2025-05-05 00:58:38 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:41.262607 | orchestrator | 2025-05-05 00:58:38 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:41.262748 | orchestrator | 2025-05-05 00:58:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:41.263910 | orchestrator | 2025-05-05 00:58:41 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:41.265487 | orchestrator | 2025-05-05 00:58:41 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:41.265639 | orchestrator | 2025-05-05 00:58:41 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:44.313244 | orchestrator | 2025-05-05 00:58:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:44.315948 | orchestrator | 2025-05-05 00:58:44 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state STARTED
2025-05-05 00:58:44.320376 | orchestrator | 2025-05-05 00:58:44 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:47.370233 | orchestrator | 2025-05-05 00:58:44 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:47.370433 | orchestrator | 2025-05-05 00:58:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:47.372188 | orchestrator | 2025-05-05 00:58:47 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:58:47.374298 | orchestrator | 2025-05-05 00:58:47 | INFO  | Task 7c13cb3f-3d72-40e5-ac4b-4e1dd36647cd is in state SUCCESS
2025-05-05 00:58:47.377393 | orchestrator |
2025-05-05 00:58:47.377492 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-05 00:58:47.377513 | orchestrator |
2025-05-05 00:58:47.377530 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-05-05 00:58:47.377545 | orchestrator |
2025-05-05 00:58:47.377560 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-05 00:58:47.377575 | orchestrator | Monday 05 May 2025 00:56:34 +0000 (0:00:01.207) 0:00:01.207 ************
2025-05-05 00:58:47.377591 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:58:47.377606 | orchestrator |
2025-05-05 00:58:47.377621 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-05 00:58:47.377636 | orchestrator | Monday 05 May 2025 00:56:35 +0000 (0:00:00.563) 0:00:01.770 ************
2025-05-05 00:58:47.377651 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0)
2025-05-05 00:58:47.377666 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1)
2025-05-05 00:58:47.377681 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2)
2025-05-05 00:58:47.377695 | orchestrator |
2025-05-05 00:58:47.377709 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-05 00:58:47.377724 | orchestrator | Monday 05 May 2025 00:56:36 +0000 (0:00:00.817) 0:00:02.588 ************
2025-05-05 00:58:47.377738 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3,
testbed-node-4, testbed-node-5
2025-05-05 00:58:47.377754 | orchestrator |
2025-05-05 00:58:47.377795 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-05 00:58:47.377810 | orchestrator | Monday 05 May 2025 00:56:36 +0000 (0:00:00.697) 0:00:03.286 ************
2025-05-05 00:58:47.377825 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.377840 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.377854 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.377868 | orchestrator |
2025-05-05 00:58:47.377883 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-05 00:58:47.377897 | orchestrator | Monday 05 May 2025 00:56:37 +0000 (0:00:00.686) 0:00:03.972 ************
2025-05-05 00:58:47.377914 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.377931 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.377947 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.377963 | orchestrator |
2025-05-05 00:58:47.377979 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-05 00:58:47.377995 | orchestrator | Monday 05 May 2025 00:56:38 +0000 (0:00:00.345) 0:00:04.318 ************
2025-05-05 00:58:47.378011 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.378081 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.378098 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.378114 | orchestrator |
2025-05-05 00:58:47.378129 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-05 00:58:47.378146 | orchestrator | Monday 05 May 2025 00:56:38 +0000 (0:00:00.886) 0:00:05.204 ************
2025-05-05 00:58:47.378161 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.378176 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.378190 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.378204 | orchestrator |
2025-05-05 00:58:47.378218 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-05 00:58:47.378232 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.306) 0:00:05.511 ************
2025-05-05 00:58:47.378245 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.378259 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.378288 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.378302 | orchestrator |
2025-05-05 00:58:47.378317 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-05 00:58:47.378331 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.324) 0:00:05.835 ************
2025-05-05 00:58:47.378365 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.378380 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.378394 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.378408 | orchestrator |
2025-05-05 00:58:47.378422 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-05 00:58:47.378437 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.331) 0:00:06.166 ************
2025-05-05 00:58:47.378452 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.378467 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.378481 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.378495 | orchestrator |
2025-05-05 00:58:47.378510 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-05 00:58:47.378524 | orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.536) 0:00:06.703 ************
2025-05-05 00:58:47.378538 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.378552 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.378566 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.378580 | orchestrator |
2025-05-05 00:58:47.378594 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-05 00:58:47.378608 | orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.296) 0:00:06.999 ************
2025-05-05 00:58:47.378622 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-05 00:58:47.378641 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:58:47.378655 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:58:47.378670 | orchestrator |
2025-05-05 00:58:47.378693 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-05 00:58:47.378707 | orchestrator | Monday 05 May 2025 00:56:41 +0000 (0:00:00.540) 0:00:07.729 ************
2025-05-05 00:58:47.378721 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.378735 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.378749 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.378763 | orchestrator |
2025-05-05 00:58:47.378777 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-05 00:58:47.378792 | orchestrator | Monday 05 May 2025 00:56:41 +0000 (0:00:00.528) 0:00:08.270 ************
2025-05-05 00:58:47.378819 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-05 00:58:47.378834 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:58:47.378848 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:58:47.378862 | orchestrator |
2025-05-05 00:58:47.378876 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-05 00:58:47.378890 | orchestrator | Monday 05 May 2025 00:56:44 +0000 (0:00:02.418) 0:00:10.688 ************
2025-05-05 00:58:47.378904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-05 00:58:47.378919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-05 00:58:47.378933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-05 00:58:47.378947 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.378961 | orchestrator | 2025-05-05 00:58:47.378975 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-05 00:58:47.378989 | orchestrator | Monday 05 May 2025 00:56:44 +0000 (0:00:00.528) 0:00:11.217 ************ 2025-05-05 00:58:47.379004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-05 00:58:47.379021 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-05 00:58:47.379036 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-05 00:58:47.379050 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.379064 | orchestrator | 2025-05-05 00:58:47.379079 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-05 00:58:47.379093 | orchestrator | Monday 05 May 2025 00:56:45 +0000 (0:00:00.749) 0:00:11.967 ************ 2025-05-05 00:58:47.379108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-05 00:58:47.379124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-05 00:58:47.379139 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-05 00:58:47.379160 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.379174 | orchestrator | 2025-05-05 00:58:47.379188 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-05 00:58:47.379202 | orchestrator | Monday 05 May 2025 00:56:45 +0000 (0:00:00.177) 0:00:12.144 ************ 2025-05-05 00:58:47.379219 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '40b51300a323', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-05 00:56:42.821606', 'end': '2025-05-05 00:56:42.871871', 'delta': '0:00:00.050265', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', 
'_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['40b51300a323'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-05 00:58:47.379248 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '3359a2970920', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-05 00:56:43.462776', 'end': '2025-05-05 00:56:43.504660', 'delta': '0:00:00.041884', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3359a2970920'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-05 00:58:47.379265 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a163903501d3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-05 00:56:44.001870', 'end': '2025-05-05 00:56:44.051758', 'delta': '0:00:00.049888', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a163903501d3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-05 00:58:47.379280 | orchestrator | 2025-05-05 00:58:47.379294 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] 
*******************************
2025-05-05 00:58:47.379309 | orchestrator | Monday 05 May 2025 00:56:46 +0000 (0:00:00.219) 0:00:12.363 ************
2025-05-05 00:58:47.379323 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.379397 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.379413 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.379428 | orchestrator |
2025-05-05 00:58:47.379442 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-05-05 00:58:47.379457 | orchestrator | Monday 05 May 2025 00:56:46 +0000 (0:00:00.504) 0:00:12.868 ************
2025-05-05 00:58:47.379471 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-05-05 00:58:47.379485 | orchestrator |
2025-05-05 00:58:47.379499 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-05-05 00:58:47.379513 | orchestrator | Monday 05 May 2025 00:56:48 +0000 (0:00:01.452) 0:00:14.320 ************
2025-05-05 00:58:47.379528 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.379542 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.379556 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.379571 | orchestrator |
2025-05-05 00:58:47.379594 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-05-05 00:58:47.379608 | orchestrator | Monday 05 May 2025 00:56:48 +0000 (0:00:00.494) 0:00:14.814 ************
2025-05-05 00:58:47.379622 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.379636 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.379650 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.379664 | orchestrator |
2025-05-05 00:58:47.379678 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-05 00:58:47.379692 | orchestrator | Monday 05 May 2025 00:56:48 +0000 (0:00:00.442) 0:00:15.257 ************
2025-05-05 00:58:47.379705 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.379720 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.379734 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.379748 | orchestrator |
2025-05-05 00:58:47.379762 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-05-05 00:58:47.379776 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:00.305) 0:00:15.562 ************
2025-05-05 00:58:47.379791 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.379805 | orchestrator |
2025-05-05 00:58:47.379819 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-05-05 00:58:47.379833 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:00.150) 0:00:15.713 ************
2025-05-05 00:58:47.379847 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.379861 | orchestrator |
2025-05-05 00:58:47.379875 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-05 00:58:47.379895 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:00.240) 0:00:15.953 ************
2025-05-05 00:58:47.379909 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.379923 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.379937 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.379951 | orchestrator |
2025-05-05 00:58:47.379965 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-05-05 00:58:47.379979 | orchestrator | Monday 05 May 2025 00:56:50 +0000 (0:00:00.587) 0:00:16.541 ************
2025-05-05 00:58:47.379993 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.380007 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.380021 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.380035 | orchestrator |
2025-05-05 00:58:47.380049 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-05-05 00:58:47.380063 | orchestrator | Monday 05 May 2025 00:56:50 +0000 (0:00:00.452) 0:00:16.993 ************
2025-05-05 00:58:47.380077 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.380091 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.380106 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.380119 | orchestrator |
2025-05-05 00:58:47.380133 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-05-05 00:58:47.380147 | orchestrator | Monday 05 May 2025 00:56:51 +0000 (0:00:00.331) 0:00:17.324 ************
2025-05-05 00:58:47.380161 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.380175 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.380196 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.380211 | orchestrator |
2025-05-05 00:58:47.380225 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-05-05 00:58:47.380239 | orchestrator | Monday 05 May 2025 00:56:51 +0000 (0:00:00.502) 0:00:17.827 ************
2025-05-05 00:58:47.380253 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.380267 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.380281 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.380295 | orchestrator |
2025-05-05 00:58:47.380309 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-05-05 00:58:47.380323 | orchestrator | Monday 05 May 2025 00:56:52 +0000 (0:00:00.636) 0:00:18.464 ************
2025-05-05 00:58:47.380354 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.380376 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.380390 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.380405 | orchestrator | 2025-05-05
00:58:47.380419 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-05 00:58:47.380433 | orchestrator | Monday 05 May 2025 00:56:52 +0000 (0:00:00.385) 0:00:18.849 ************ 2025-05-05 00:58:47.380446 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.380460 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:58:47.380474 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:58:47.380493 | orchestrator | 2025-05-05 00:58:47.380508 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-05 00:58:47.380522 | orchestrator | Monday 05 May 2025 00:56:52 +0000 (0:00:00.363) 0:00:19.213 ************ 2025-05-05 00:58:47.380537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b45d62aa--c8ca--51ec--bff2--6c96656db621-osd--block--b45d62aa--c8ca--51ec--bff2--6c96656db621', 'dm-uuid-LVM-hnmgOczfBJunDr1vwEvWDejbUNuXDIdyFAgOBK6ZbyjK5dwz2J33ScNK1h9SrZgs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac6a629e--412f--52b8--abc2--7f30e47159be-osd--block--ac6a629e--412f--52b8--abc2--7f30e47159be', 'dm-uuid-LVM-vQsQ946lcJ2Gx4z82zLL3f8f7WZpY02FQ74UNF8fMdtuEc7kQmbe8B7IY1X70JwQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f-osd--block--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f', 'dm-uuid-LVM-klE3QE9qijUUyOsOiKdGx1JXqX2wl0UDOeidRx9ZMXtG3iUc6PlvnvMVxAew4ir4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dbbf782--cf90--597f--b1d9--d891fd7b35f3-osd--block--1dbbf782--cf90--597f--b1d9--d891fd7b35f3', 'dm-uuid-LVM-xw3KxQooL3tY7dpsf9NDBb8HRiuK20YIhVAffn2UJNeESfvGpN0EZwPNMbuP1Xhi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_e63a3641-9ab8-401e-ae51-b6341150c0e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.380771 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b45d62aa--c8ca--51ec--bff2--6c96656db621-osd--block--b45d62aa--c8ca--51ec--bff2--6c96656db621'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SXdFIF-7MUr-cpGo-XIOC-Axp8-F4Td-E7diUm', 'scsi-0QEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7', 'scsi-SQEMU_QEMU_HARDDISK_42838bfa-cc1b-4702-98d9-e28ebdac68d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.380788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ac6a629e--412f--52b8--abc2--7f30e47159be-osd--block--ac6a629e--412f--52b8--abc2--7f30e47159be'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NWZ3aZ-1A0u-tZ37-0K7f-NDWE-eBlJ-Uoe6Pz', 'scsi-0QEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6', 'scsi-SQEMU_QEMU_HARDDISK_2486f75a-e60a-48fd-8d37-a608e25639e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.380817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8', 'scsi-SQEMU_QEMU_HARDDISK_6233d7b1-dcc8-4e9e-9ddc-c6ed1dc9bbe8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.380861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.380877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19ded391--41bb--58c4--acef--51f998367f5e-osd--block--19ded391--41bb--58c4--acef--51f998367f5e', 'dm-uuid-LVM-iP9n6Su8uagSXZCmHykXetfNCMzJp85hXNsiqszFZRXMAFHTd69p76ijZC3CBpGQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380906 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.380921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e-osd--block--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e', 'dm-uuid-LVM-QcoMKqkYKnqoYLB7gRNJ5H919jNE74oCUkSZbjnDQns5OA7mOeS3YBStbwOsjCDz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.380984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381123 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part1', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part14', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part15', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part16', 'scsi-SQEMU_QEMU_HARDDISK_303ddda9-04c8-4db7-a324-20b01373288b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:58:47.381186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f-osd--block--09f6cbbb--bab3--56dc--a9fe--f7e4ce5d119f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eeBbD7-r2vB-eYNZ-f0Pz-OaS4-Agz1-pbcdzj', 'scsi-0QEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164', 'scsi-SQEMU_QEMU_HARDDISK_781aa17e-e7c9-4602-9f68-f5aa193f4164'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3969d65f-a534-4e1c-b0b2-b40e2f22590e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1dbbf782--cf90--597f--b1d9--d891fd7b35f3-osd--block--1dbbf782--cf90--597f--b1d9--d891fd7b35f3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ao8opF-U5PX-1ZHV-JYce-Qej7-j4Ty-xArusv', 'scsi-0QEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170', 'scsi-SQEMU_QEMU_HARDDISK_4d0bf700-f9e0-49dc-ac25-e14623495170'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--19ded391--41bb--58c4--acef--51f998367f5e-osd--block--19ded391--41bb--58c4--acef--51f998367f5e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xYYKKT-4p1b-FemN-hazU-fS5q-TY3A-e9eZXz', 'scsi-0QEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370', 'scsi-SQEMU_QEMU_HARDDISK_af745260-1df8-42ba-a894-c5ed39f05370'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e-osd--block--5b3e4e2d--95bb--5d7e--b29f--9e0b9408011e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-riweGy-1qvw-WXU0-xyl9-7Pb2-SY2q-iapc2L', 'scsi-0QEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10', 'scsi-SQEMU_QEMU_HARDDISK_42a6e7e5-8ee1-4531-a79c-d61afd2d8a10'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e', 'scsi-SQEMU_QEMU_HARDDISK_d858a9fc-f161-4032-83a3-99286d7d6b6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d', 'scsi-SQEMU_QEMU_HARDDISK_615e20fc-a585-4d17-960f-58a126b0377d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381403 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:58:47.381418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:58:47.381433 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:58:47.381447 | orchestrator | 2025-05-05 00:58:47.381460 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-05 00:58:47.381473 | orchestrator | Monday 05 May 2025 00:56:53 +0000 (0:00:00.680) 0:00:19.894 ************ 2025-05-05 00:58:47.381486 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-05 00:58:47.381498 | orchestrator | 2025-05-05 00:58:47.381511 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-05 00:58:47.381524 | orchestrator | Monday 05 May 2025 00:56:55 +0000 (0:00:01.579) 0:00:21.474 ************ 2025-05-05 00:58:47.381536 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:58:47.381549 | orchestrator | 2025-05-05 00:58:47.381561 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-05 00:58:47.381574 | orchestrator | Monday 05 May 2025 00:56:55 +0000 (0:00:00.174) 0:00:21.648 ************ 2025-05-05 00:58:47.381586 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:58:47.381599 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:58:47.381612 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:58:47.381624 | orchestrator | 2025-05-05 00:58:47.381637 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-05 00:58:47.381649 | orchestrator | Monday 05 May 2025 00:56:55 +0000 (0:00:00.553) 0:00:22.201 ************ 
2025-05-05 00:58:47.381662 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:58:47.381674 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:58:47.381687 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:58:47.381699 | orchestrator | 2025-05-05 00:58:47.381712 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-05 00:58:47.381724 | orchestrator | Monday 05 May 2025 00:56:56 +0000 (0:00:00.694) 0:00:22.896 ************ 2025-05-05 00:58:47.381737 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:58:47.381749 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:58:47.381762 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:58:47.381774 | orchestrator | 2025-05-05 00:58:47.381787 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-05 00:58:47.381807 | orchestrator | Monday 05 May 2025 00:56:56 +0000 (0:00:00.379) 0:00:23.275 ************ 2025-05-05 00:58:47.381819 | orchestrator | ok: [testbed-node-3] 2025-05-05 00:58:47.381832 | orchestrator | ok: [testbed-node-4] 2025-05-05 00:58:47.381844 | orchestrator | ok: [testbed-node-5] 2025-05-05 00:58:47.381857 | orchestrator | 2025-05-05 00:58:47.381869 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-05 00:58:47.381882 | orchestrator | Monday 05 May 2025 00:56:57 +0000 (0:00:00.952) 0:00:24.228 ************ 2025-05-05 00:58:47.381894 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.381907 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:58:47.381920 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:58:47.381932 | orchestrator | 2025-05-05 00:58:47.381945 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-05 00:58:47.381957 | orchestrator | Monday 05 May 2025 00:56:58 +0000 (0:00:00.319) 0:00:24.548 ************ 2025-05-05 00:58:47.381970 | orchestrator | skipping: 
[testbed-node-3] 2025-05-05 00:58:47.381983 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:58:47.381995 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:58:47.382008 | orchestrator | 2025-05-05 00:58:47.382046 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-05 00:58:47.382067 | orchestrator | Monday 05 May 2025 00:56:58 +0000 (0:00:00.563) 0:00:25.112 ************ 2025-05-05 00:58:47.382080 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.382092 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:58:47.382105 | orchestrator | skipping: [testbed-node-5] 2025-05-05 00:58:47.382117 | orchestrator | 2025-05-05 00:58:47.382130 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-05 00:58:47.382142 | orchestrator | Monday 05 May 2025 00:56:59 +0000 (0:00:00.391) 0:00:25.504 ************ 2025-05-05 00:58:47.382155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-05 00:58:47.382167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-05 00:58:47.382180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-05 00:58:47.382192 | orchestrator | skipping: [testbed-node-3] 2025-05-05 00:58:47.382205 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-05 00:58:47.382221 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-05 00:58:47.382234 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-05 00:58:47.382246 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-05 00:58:47.382258 | orchestrator | skipping: [testbed-node-4] 2025-05-05 00:58:47.382271 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-05 00:58:47.382283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-05 00:58:47.382295 | 
orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.382308 | orchestrator |
2025-05-05 00:58:47.382321 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-05 00:58:47.382353 | orchestrator | Monday 05 May 2025 00:57:00 +0000 (0:00:01.254) 0:00:26.758 ************
2025-05-05 00:58:47.382367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-05 00:58:47.382380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-05 00:58:47.382392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-05 00:58:47.382405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-05 00:58:47.382417 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.382430 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-05 00:58:47.382442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-05 00:58:47.382454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-05 00:58:47.382467 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-05 00:58:47.382479 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.382499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-05 00:58:47.382511 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.382524 | orchestrator |
2025-05-05 00:58:47.382536 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-05 00:58:47.382549 | orchestrator | Monday 05 May 2025 00:57:01 +0000 (0:00:00.844) 0:00:27.602 ************
2025-05-05 00:58:47.382561 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-05 00:58:47.382574 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-05 00:58:47.382586 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-05 00:58:47.382599 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-05 00:58:47.382611 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-05 00:58:47.382624 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-05 00:58:47.382636 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-05 00:58:47.382649 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-05 00:58:47.382661 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-05 00:58:47.382673 | orchestrator |
2025-05-05 00:58:47.382686 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-05 00:58:47.382699 | orchestrator | Monday 05 May 2025 00:57:03 +0000 (0:00:02.247) 0:00:29.850 ************
2025-05-05 00:58:47.382711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-05 00:58:47.382723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-05 00:58:47.382736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-05 00:58:47.382748 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.382761 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-05 00:58:47.382773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-05 00:58:47.382785 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-05 00:58:47.382798 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.382811 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-05 00:58:47.382823 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-05 00:58:47.382835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-05 00:58:47.382848 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.382860 | orchestrator |
2025-05-05 00:58:47.382873 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-05 00:58:47.382885 | orchestrator | Monday 05 May 2025 00:57:04 +0000 (0:00:00.519) 0:00:30.369 ************
2025-05-05 00:58:47.382898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-05 00:58:47.382910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-05 00:58:47.382923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-05 00:58:47.382935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-05 00:58:47.383131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-05 00:58:47.383149 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383161 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-05 00:58:47.383174 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.383187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-05 00:58:47.383199 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-05 00:58:47.383212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-05 00:58:47.383225 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.383238 | orchestrator |
2025-05-05 00:58:47.383250 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-05 00:58:47.383263 | orchestrator | Monday 05 May 2025 00:57:04 +0000 (0:00:00.320) 0:00:30.689 ************
2025-05-05 00:58:47.383276 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-05 00:58:47.383295 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-05 00:58:47.383308 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-05 00:58:47.383321 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-05 00:58:47.383348 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-05 00:58:47.383362 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-05 00:58:47.383375 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383388 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.383401 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-05 00:58:47.383420 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-05 00:58:47.383433 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-05 00:58:47.383446 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.383458 | orchestrator |
2025-05-05 00:58:47.383471 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-05 00:58:47.383484 | orchestrator | Monday 05 May 2025 00:57:04 +0000 (0:00:00.318) 0:00:31.008 ************
2025-05-05 00:58:47.383496 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 00:58:47.383509 | orchestrator |
2025-05-05 00:58:47.383521 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-05 00:58:47.383534 | orchestrator | Monday 05 May 2025 00:57:05 +0000 (0:00:00.506) 0:00:31.514 ************
2025-05-05 00:58:47.383546 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383559 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.383572 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.383584 | orchestrator |
2025-05-05 00:58:47.383597 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-05 00:58:47.383609 | orchestrator | Monday 05 May 2025 00:57:05 +0000 (0:00:00.264) 0:00:31.779 ************
2025-05-05 00:58:47.383621 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383634 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.383646 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.383658 | orchestrator |
2025-05-05 00:58:47.383671 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-05 00:58:47.383683 | orchestrator | Monday 05 May 2025 00:57:05 +0000 (0:00:00.301) 0:00:32.081 ************
2025-05-05 00:58:47.383696 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383708 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.383721 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.383733 | orchestrator |
2025-05-05 00:58:47.383745 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-05 00:58:47.383758 | orchestrator | Monday 05 May 2025 00:57:06 +0000 (0:00:00.563) 0:00:32.374 ************
2025-05-05 00:58:47.383770 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.383783 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.383800 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.383813 | orchestrator |
2025-05-05 00:58:47.383825 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-05 00:58:47.383838 | orchestrator | Monday 05 May 2025 00:57:06 +0000 (0:00:00.321) 0:00:32.938 ************
2025-05-05 00:58:47.383851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:58:47.383863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:58:47.383875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:58:47.383894 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383907 | orchestrator |
2025-05-05 00:58:47.383919 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-05 00:58:47.383932 | orchestrator | Monday 05 May 2025 00:57:06 +0000 (0:00:00.349) 0:00:33.259 ************
2025-05-05 00:58:47.383944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:58:47.383956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:58:47.383973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:58:47.383985 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.383998 | orchestrator |
2025-05-05 00:58:47.384011 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-05 00:58:47.384023 | orchestrator | Monday 05 May 2025 00:57:07 +0000 (0:00:00.349) 0:00:33.608 ************
2025-05-05 00:58:47.384035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:58:47.384048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:58:47.384060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:58:47.384072 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384085 | orchestrator |
2025-05-05 00:58:47.384097 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:58:47.384110 | orchestrator | Monday 05 May 2025 00:57:07 +0000 (0:00:00.290) 0:00:33.899 ************
2025-05-05 00:58:47.384122 | orchestrator | ok: [testbed-node-3]
2025-05-05 00:58:47.384134 | orchestrator | ok: [testbed-node-4]
2025-05-05 00:58:47.384147 | orchestrator | ok: [testbed-node-5]
2025-05-05 00:58:47.384159 | orchestrator |
2025-05-05 00:58:47.384172 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-05 00:58:47.384188 | orchestrator | Monday 05 May 2025 00:57:07 +0000 (0:00:00.297) 0:00:34.196 ************
2025-05-05 00:58:47.384201 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-05 00:58:47.384213 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-05 00:58:47.384226 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-05 00:58:47.384238 | orchestrator |
2025-05-05 00:58:47.384251 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-05 00:58:47.384263 | orchestrator | Monday 05 May 2025 00:57:08 +0000 (0:00:00.811) 0:00:35.008 ************
2025-05-05 00:58:47.384275 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384288 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.384300 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.384312 | orchestrator |
2025-05-05 00:58:47.384325 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-05 00:58:47.384384 | orchestrator | Monday 05 May 2025 00:57:08 +0000 (0:00:00.282) 0:00:35.291 ************
2025-05-05 00:58:47.384399 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384412 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.384426 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.384438 | orchestrator |
2025-05-05 00:58:47.384451 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-05 00:58:47.384469 | orchestrator | Monday 05 May 2025 00:57:09 +0000 (0:00:00.299) 0:00:35.590 ************
2025-05-05 00:58:47.384483 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-05 00:58:47.384495 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384508 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-05 00:58:47.384521 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.384534 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-05 00:58:47.384546 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.384559 | orchestrator |
2025-05-05 00:58:47.384572 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-05 00:58:47.384584 | orchestrator | Monday 05 May 2025 00:57:09 +0000 (0:00:00.349) 0:00:35.939 ************
2025-05-05 00:58:47.384597 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-05 00:58:47.384621 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384634 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-05 00:58:47.384644 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.384655 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-05 00:58:47.384665 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.384676 | orchestrator |
2025-05-05 00:58:47.384686 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-05 00:58:47.384697 | orchestrator | Monday 05 May 2025 00:57:10 +0000 (0:00:00.425) 0:00:36.365 ************
2025-05-05 00:58:47.384707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:58:47.384718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-05 00:58:47.384728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-05 00:58:47.384738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-05 00:58:47.384748 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-05 00:58:47.384758 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-05 00:58:47.384768 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.384778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-05 00:58:47.384789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-05 00:58:47.384799 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384810 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-05 00:58:47.384821 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.384831 | orchestrator |
2025-05-05 00:58:47.384841 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-05 00:58:47.384852 | orchestrator | Monday 05 May 2025 00:57:10 +0000 (0:00:00.663) 0:00:37.028 ************
2025-05-05 00:58:47.384862 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.384873 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.384883 | orchestrator | skipping: [testbed-node-5]
2025-05-05 00:58:47.384893 | orchestrator |
2025-05-05 00:58:47.384903 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-05 00:58:47.384914 | orchestrator | Monday 05 May 2025 00:57:10 +0000 (0:00:00.212) 0:00:37.241 ************
2025-05-05 00:58:47.384925 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-05 00:58:47.384935 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:58:47.384946 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:58:47.384956 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:58:47.384966 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-05 00:58:47.384976 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-05 00:58:47.384987 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-05 00:58:47.384997 | orchestrator |
2025-05-05 00:58:47.385007 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-05 00:58:47.385017 | orchestrator | Monday 05 May 2025 00:57:11 +0000 (0:00:00.780) 0:00:38.021 ************
2025-05-05 00:58:47.385027 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-05 00:58:47.385038 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-05 00:58:47.385048 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-05 00:58:47.385058 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-05 00:58:47.385073 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-05 00:58:47.385083 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-05 00:58:47.385094 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-05 00:58:47.385104 | orchestrator |
2025-05-05 00:58:47.385114 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-05-05 00:58:47.385124 | orchestrator | Monday 05 May 2025 00:57:13 +0000 (0:00:01.542) 0:00:39.563 ************
2025-05-05 00:58:47.385134 | orchestrator | skipping: [testbed-node-3]
2025-05-05 00:58:47.385145 | orchestrator | skipping: [testbed-node-4]
2025-05-05 00:58:47.385155 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-05-05 00:58:47.385165 | orchestrator |
2025-05-05 00:58:47.385176 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-05-05 00:58:47.385194 | orchestrator | Monday 05 May 2025 00:57:13 +0000 (0:00:00.555) 0:00:40.119 ************
2025-05-05 00:58:47.385206 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-05 00:58:47.385219 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-05 00:58:47.385230 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-05 00:58:47.385240 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-05 00:58:47.385251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-05 00:58:47.385261 | orchestrator |
2025-05-05 00:58:47.385271 | orchestrator | TASK [generate keys] ***********************************************************
2025-05-05 00:58:47.385282 | orchestrator | Monday 05 May 2025 00:57:55 +0000 (0:00:42.171) 0:01:22.291 ************
2025-05-05 00:58:47.385292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385312 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385322 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385332 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385359 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385369 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-05-05 00:58:47.385380 | orchestrator |
2025-05-05 00:58:47.385390 | orchestrator | TASK [get keys from monitors] **************************************************
2025-05-05 00:58:47.385400 | orchestrator | Monday 05 May 2025 00:58:16 +0000 (0:00:20.460) 0:01:42.751 ************
2025-05-05 00:58:47.385410 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385425 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385436 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385446 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385460 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385470 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385480 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-05 00:58:47.385490 | orchestrator |
2025-05-05 00:58:47.385500 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-05-05 00:58:47.385511 | orchestrator | Monday 05 May 2025 00:58:26 +0000 (0:00:09.864) 0:01:52.616 ************
2025-05-05 00:58:47.385521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385531 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-05 00:58:47.385541 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-05 00:58:47.385551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385561 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-05 00:58:47.385571 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-05 00:58:47.385582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385592 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-05 00:58:47.385602 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-05 00:58:47.385612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:47.385622 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-05 00:58:47.385637 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-05 00:58:50.425223 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:50.425427 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-05 00:58:50.425453 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-05 00:58:50.425469 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-05 00:58:50.425484 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-05 00:58:50.425498 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-05 00:58:50.425513 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-05 00:58:50.425528 | orchestrator |
2025-05-05 00:58:50.425543 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 00:58:50.425561 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0  failed=0  skipped=37  rescued=0  ignored=0
2025-05-05 00:58:50.425577 | orchestrator | testbed-node-4 : ok=20  changed=0  unreachable=0  failed=0  skipped=30  rescued=0  ignored=0
2025-05-05 00:58:50.425591 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0  failed=0  skipped=29  rescued=0  ignored=0
2025-05-05 00:58:50.425606 | orchestrator |
2025-05-05 00:58:50.425620 | orchestrator |
2025-05-05 00:58:50.425635 | orchestrator |
2025-05-05 00:58:50.425649 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 00:58:50.425663 | orchestrator | Monday 05 May 2025 00:58:44 +0000 (0:00:18.053) 0:02:10.669 ************
2025-05-05 00:58:50.425703 | orchestrator | ===============================================================================
2025-05-05 00:58:50.425718 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.17s
2025-05-05 00:58:50.425732 | orchestrator | generate keys ---------------------------------------------------------- 20.46s
2025-05-05 00:58:50.425748 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.05s
2025-05-05 00:58:50.425786 | orchestrator | get keys from monitors -------------------------------------------------- 9.86s
2025-05-05 00:58:50.425803 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.42s
2025-05-05 00:58:50.425818 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.25s
2025-05-05 00:58:50.425833 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.58s
2025-05-05 00:58:50.425847 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.54s
2025-05-05 00:58:50.425862 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.45s
2025-05-05 00:58:50.425876 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.25s
2025-05-05 00:58:50.425890 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.95s
2025-05-05 00:58:50.425904 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.89s
2025-05-05 00:58:50.425919 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.84s
2025-05-05 00:58:50.425933 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.82s
2025-05-05 00:58:50.425947 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.81s
2025-05-05 00:58:50.425961 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.78s
2025-05-05 00:58:50.425976 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.75s
2025-05-05 00:58:50.425990 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s
2025-05-05 00:58:50.426004 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.70s
2025-05-05 00:58:50.426079 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.69s
2025-05-05 00:58:50.426097 | orchestrator | 2025-05-05 00:58:47 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:50.426112 | orchestrator | 2025-05-05 00:58:47 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:50.426144 | orchestrator | 2025-05-05 00:58:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:50.427822 | orchestrator | 2025-05-05 00:58:50 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:58:53.478975 | orchestrator | 2025-05-05 00:58:50 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:53.479101 | orchestrator | 2025-05-05 00:58:50 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:53.479138 | orchestrator | 2025-05-05 00:58:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:53.480027 | orchestrator | 2025-05-05 00:58:53 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:58:53.482722 | orchestrator | 2025-05-05 00:58:53 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:56.542992 | orchestrator | 2025-05-05 00:58:53 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:56.543141 | orchestrator | 2025-05-05 00:58:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:56.543866 | orchestrator | 2025-05-05 00:58:56 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:58:56.545609 | orchestrator | 2025-05-05 00:58:56 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED
2025-05-05 00:58:56.546818 | orchestrator | 2025-05-05 00:58:56 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:58:59.587981 | orchestrator | 2025-05-05 00:58:56 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:58:59.588123 | orchestrator | 2025-05-05 00:58:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:58:59.589572 | orchestrator | 2025-05-05 00:58:59 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:58:59.590851 | orchestrator | 2025-05-05 00:58:59 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED
2025-05-05 00:58:59.592254 | orchestrator | 2025-05-05 00:58:59 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:59:02.637832 | orchestrator | 2025-05-05 00:58:59 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:59:02.637980 | orchestrator | 2025-05-05 00:59:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:59:02.639470 | orchestrator | 2025-05-05 00:59:02 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:59:02.641172 | orchestrator | 2025-05-05 00:59:02 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED
2025-05-05 00:59:02.642904 | orchestrator | 2025-05-05 00:59:02 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:59:05.701868 | orchestrator | 2025-05-05 00:59:02 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:59:05.702008 | orchestrator | 2025-05-05 00:59:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:59:05.702786 | orchestrator | 2025-05-05 00:59:05 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:59:05.704246 | orchestrator | 2025-05-05 00:59:05 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED
2025-05-05 00:59:05.705994 | orchestrator | 2025-05-05 00:59:05 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:59:08.754010 | orchestrator | 2025-05-05 00:59:05 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:59:08.754209 | orchestrator | 2025-05-05 00:59:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:59:08.755128 | orchestrator | 2025-05-05 00:59:08 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:59:08.756477 | orchestrator | 2025-05-05 00:59:08 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED
2025-05-05 00:59:08.757425 | orchestrator | 2025-05-05 00:59:08 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state STARTED
2025-05-05 00:59:11.805954 | orchestrator | 2025-05-05 00:59:08 | INFO  | Wait 1 second(s) until the next check
2025-05-05 00:59:11.806200 | orchestrator | 2025-05-05 00:59:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:59:11.807255 | orchestrator | 2025-05-05 00:59:11 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:59:11.808198 | orchestrator | 2025-05-05 00:59:11 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED
2025-05-05 00:59:11.810196 | orchestrator | 2025-05-05 00:59:11 | INFO  | Task 4c2ecffa-0842-4fb9-bd86-955147c45e71 is in state SUCCESS
2025-05-05 00:59:11.812009 | orchestrator |
2025-05-05 00:59:11.812073 | orchestrator |
2025-05-05 00:59:11.812090 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 00:59:11.812105 | orchestrator |
2025-05-05 00:59:11.812120 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 00:59:11.812691 | orchestrator | Monday 05 May 2025 00:56:34 +0000 (0:00:00.332) 0:00:00.332 ************
2025-05-05 00:59:11.812716 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:59:11.812733 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:59:11.812748 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:59:11.812762 | orchestrator |
2025-05-05 00:59:11.812789 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 00:59:11.812804 | orchestrator | Monday 05 May 2025 00:56:35 +0000 (0:00:00.486) 0:00:00.818 ************
2025-05-05 00:59:11.812819 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-05 00:59:11.812834 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-05 00:59:11.812848 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-05 00:59:11.812862 | orchestrator |
2025-05-05 00:59:11.812876 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-05-05 00:59:11.812891 | orchestrator |
2025-05-05 00:59:11.812905 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-05 00:59:11.812919 | orchestrator | Monday 05 May 2025 00:56:35 +0000 (0:00:00.328) 0:00:01.147 ************
2025-05-05 00:59:11.812933 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 00:59:11.812948 | orchestrator |
2025-05-05 00:59:11.812962 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-05-05 00:59:11.812983 | orchestrator | Monday 05 May 2025 00:56:36 +0000 (0:00:00.849) 0:00:01.997 ************
2025-05-05 00:59:11.813012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.813068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.813149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:59:11.813183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.813287 | orchestrator |
2025-05-05 00:59:11.813304 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-05-05 00:59:11.813325 | orchestrator | Monday 05 May 2025 00:56:38 +0000 (0:00:02.344) 0:00:04.342 ************
2025-05-05 00:59:11.813367 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-05-05 00:59:11.813384 | orchestrator |
2025-05-05 00:59:11.813400 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-05-05 00:59:11.813415 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.561) 0:00:04.903 ************
2025-05-05 00:59:11.813431 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:59:11.813447 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:59:11.813464 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:59:11.813479 | orchestrator |
2025-05-05 00:59:11.813494 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-05-05 00:59:11.813510 | orchestrator | Monday 05 May 2025 00:56:39 +0000 (0:00:00.391) 0:00:05.374 ************
2025-05-05 00:59:11.813525 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-05 00:59:11.813542 | orchestrator |
2025-05-05 00:59:11.813557 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-05 00:59:11.813573
| orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.391) 0:00:05.765 ************ 2025-05-05 00:59:11.813589 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:59:11.813604 | orchestrator | 2025-05-05 00:59:11.813619 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-05 00:59:11.813632 | orchestrator | Monday 05 May 2025 00:56:40 +0000 (0:00:00.688) 0:00:06.453 ************ 2025-05-05 00:59:11.813647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:59:11.813663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:59:11.813697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-05 00:59:11.813714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-05 00:59:11.813810 | orchestrator | 2025-05-05 00:59:11.813825 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-05 00:59:11.813839 | 
orchestrator | Monday 05 May 2025 00:56:44 +0000 (0:00:03.528) 0:00:09.982 ************ 2025-05-05 00:59:11.813862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-05 00:59:11.813879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:59:11.813893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:59:11.813908 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:11.813922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-05 00:59:11.813944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:59:11.813967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:59:11.813983 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:59:11.813998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-05 00:59:11.814013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:59:11.814106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:59:11.814145 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:59:11.814163 | orchestrator | 2025-05-05 00:59:11.814177 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-05 00:59:11.814192 | orchestrator | Monday 05 May 2025 00:56:45 +0000 (0:00:00.959) 0:00:10.942 ************ 2025-05-05 00:59:11.814208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-05 00:59:11.814234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-05 00:59:11.814250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-05 00:59:11.814265 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:11.814280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-05 00:59:11.814305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})
2025-05-05 00:59:11.814320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814335 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:59:11.814402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814456 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:59:11.814471 | orchestrator |
2025-05-05 00:59:11.814486 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-05-05 00:59:11.814500 | orchestrator | Monday 05 May 2025 00:56:46 +0000 (0:00:01.307) 0:00:12.249 ************
2025-05-05 00:59:11.814515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814675 | orchestrator |
2025-05-05 00:59:11.814689 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-05-05 00:59:11.814704 | orchestrator | Monday 05 May 2025 00:56:49 +0000 (0:00:03.372) 0:00:15.621 ************
2025-05-05 00:59:11.814719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.814810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.814832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.814876 | orchestrator |
2025-05-05 00:59:11.814890 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-05-05 00:59:11.814905 | orchestrator | Monday 05 May 2025 00:56:58 +0000 (0:00:08.281) 0:00:23.903 ************
2025-05-05 00:59:11.814919 | orchestrator | changed: [testbed-node-0]
2025-05-05 00:59:11.814933 | orchestrator | changed: [testbed-node-2]
2025-05-05 00:59:11.814947 | orchestrator | changed: [testbed-node-1]
2025-05-05 00:59:11.814961 | orchestrator |
2025-05-05 00:59:11.814975 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-05-05 00:59:11.814989 | orchestrator | Monday 05 May 2025 00:57:01 +0000 (0:00:03.166) 0:00:27.069 ************
2025-05-05 00:59:11.815003 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:59:11.815017 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:59:11.815032 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:59:11.815045 | orchestrator |
2025-05-05 00:59:11.815068 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-05-05 00:59:11.815093 | orchestrator | Monday 05 May 2025 00:57:03 +0000 (0:00:01.748) 0:00:28.818 ************
2025-05-05 00:59:11.815119 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:59:11.815143 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:59:11.815167 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:59:11.815199 | orchestrator |
2025-05-05 00:59:11.815219 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-05-05 00:59:11.815234 | orchestrator | Monday 05 May 2025 00:57:03 +0000 (0:00:00.599) 0:00:29.418 ************
2025-05-05 00:59:11.815257 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:59:11.815272 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:59:11.815287 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:59:11.815301 | orchestrator |
2025-05-05 00:59:11.815315 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-05-05 00:59:11.815329 | orchestrator | Monday 05 May 2025 00:57:04 +0000 (0:00:00.324) 0:00:29.742 ************
2025-05-05 00:59:11.815411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.815431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.815447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.815462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.815487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.815512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.815527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.815542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.815557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.815572 | orchestrator |
2025-05-05 00:59:11.815586 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-05 00:59:11.815601 | orchestrator | Monday 05 May 2025 00:57:06 +0000 (0:00:02.477) 0:00:32.220 ************
2025-05-05 00:59:11.815615 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:59:11.815629 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:59:11.815644 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:59:11.815658 | orchestrator |
2025-05-05 00:59:11.815672 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-05-05 00:59:11.815686 | orchestrator | Monday 05 May 2025 00:57:06 +0000 (0:00:00.218) 0:00:32.438 ************
2025-05-05 00:59:11.815707 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-05 00:59:11.815722 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-05 00:59:11.815741 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-05 00:59:11.815756 | orchestrator |
2025-05-05 00:59:11.815770 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-05-05 00:59:11.815785 | orchestrator | Monday 05 May 2025 00:57:08 +0000 (0:00:01.845) 0:00:34.284 ************
2025-05-05 00:59:11.815799 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-05 00:59:11.815813 | orchestrator |
2025-05-05 00:59:11.815827 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-05-05 00:59:11.815841 | orchestrator | Monday 05 May 2025 00:57:09 +0000 (0:00:00.652) 0:00:34.937 ************
2025-05-05 00:59:11.815855 | orchestrator | skipping: [testbed-node-1]
2025-05-05 00:59:11.815869 | orchestrator | skipping: [testbed-node-0]
2025-05-05 00:59:11.815883 | orchestrator | skipping: [testbed-node-2]
2025-05-05 00:59:11.815897 | orchestrator |
2025-05-05 00:59:11.815911 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-05-05 00:59:11.815925 | orchestrator | Monday 05 May 2025 00:57:10 +0000 (0:00:01.445) 0:00:36.382 ************
2025-05-05 00:59:11.815937 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-05 00:59:11.815950 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-05 00:59:11.815963 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-05 00:59:11.815975 | orchestrator |
2025-05-05 00:59:11.815988 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-05-05 00:59:11.816000 | orchestrator | Monday 05 May 2025 00:57:11 +0000 (0:00:01.221) 0:00:37.604 ************
2025-05-05 00:59:11.816013 | orchestrator | ok: [testbed-node-0]
2025-05-05 00:59:11.816025 | orchestrator | ok: [testbed-node-1]
2025-05-05 00:59:11.816038 | orchestrator | ok: [testbed-node-2]
2025-05-05 00:59:11.816050 | orchestrator |
2025-05-05 00:59:11.816063 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-05-05 00:59:11.816076 | orchestrator | Monday 05 May 2025 00:57:12 +0000 (0:00:00.298) 0:00:37.903 ************
2025-05-05 00:59:11.816088 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-05 00:59:11.816105 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-05 00:59:11.816127 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-05 00:59:11.816148 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-05 00:59:11.816171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-05 00:59:11.816200 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-05 00:59:11.816216 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-05 00:59:11.816229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-05 00:59:11.816241 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-05 00:59:11.816254 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-05 00:59:11.816266 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-05 00:59:11.816279 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-05 00:59:11.816291 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-05 00:59:11.816311 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-05 00:59:11.816324 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-05 00:59:11.816355 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 00:59:11.816373 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 00:59:11.816386 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 00:59:11.816399 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 00:59:11.816411 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 00:59:11.816424 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 00:59:11.816436 | orchestrator |
2025-05-05 00:59:11.816449 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-05-05 00:59:11.816462 | orchestrator | Monday 05 May 2025 00:57:22 +0000 (0:00:10.452) 0:00:48.355 ************
2025-05-05 00:59:11.816474 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 00:59:11.816486 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 00:59:11.816499 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 00:59:11.816512 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 00:59:11.816525 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 00:59:11.816543 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 00:59:11.816557 | orchestrator |
2025-05-05 00:59:11.816574 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-05-05 00:59:11.816587 | orchestrator | Monday 05 May 2025 00:57:25 +0000 (0:00:03.168) 0:00:51.523 ************
2025-05-05 00:59:11.816600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.816614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.816634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-05 00:59:11.816649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.816669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.816682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-05 00:59:11.816696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.816709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-05 00:59:11.816731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes':
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-05 00:59:11.816745 | orchestrator | 2025-05-05 00:59:11.816758 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-05 00:59:11.816770 | orchestrator | Monday 05 May 2025 00:57:28 +0000 (0:00:02.801) 0:00:54.325 ************ 2025-05-05 00:59:11.816783 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:11.816796 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:59:11.816808 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:59:11.816821 | orchestrator | 2025-05-05 00:59:11.816833 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-05 00:59:11.816846 | orchestrator | Monday 05 May 2025 00:57:28 +0000 (0:00:00.297) 0:00:54.622 ************ 2025-05-05 00:59:11.816858 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.816871 | orchestrator | 2025-05-05 00:59:11.816884 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-05 00:59:11.816896 | orchestrator | Monday 05 May 2025 00:57:31 +0000 (0:00:02.485) 0:00:57.107 ************ 2025-05-05 00:59:11.816909 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.816921 | orchestrator | 2025-05-05 00:59:11.816934 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-05 00:59:11.816947 | orchestrator | Monday 05 May 2025 00:57:33 +0000 (0:00:02.242) 0:00:59.349 ************ 2025-05-05 00:59:11.816959 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:11.816972 | orchestrator | ok: 
[testbed-node-1] 2025-05-05 00:59:11.816984 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:59:11.816997 | orchestrator | 2025-05-05 00:59:11.817009 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-05 00:59:11.817022 | orchestrator | Monday 05 May 2025 00:57:34 +0000 (0:00:00.886) 0:01:00.236 ************ 2025-05-05 00:59:11.817034 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:11.817051 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:59:11.817065 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:59:11.817077 | orchestrator | 2025-05-05 00:59:11.817090 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-05 00:59:11.817102 | orchestrator | Monday 05 May 2025 00:57:34 +0000 (0:00:00.319) 0:01:00.555 ************ 2025-05-05 00:59:11.817115 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:11.817127 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:59:11.817141 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:59:11.817163 | orchestrator | 2025-05-05 00:59:11.817184 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-05 00:59:11.817207 | orchestrator | Monday 05 May 2025 00:57:35 +0000 (0:00:00.673) 0:01:01.229 ************ 2025-05-05 00:59:11.817228 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.817243 | orchestrator | 2025-05-05 00:59:11.817255 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-05 00:59:11.817268 | orchestrator | Monday 05 May 2025 00:57:48 +0000 (0:00:12.735) 0:01:13.964 ************ 2025-05-05 00:59:11.817287 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.817300 | orchestrator | 2025-05-05 00:59:11.817312 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-05 00:59:11.817325 | orchestrator | Monday 
05 May 2025 00:57:57 +0000 (0:00:08.835) 0:01:22.799 ************ 2025-05-05 00:59:11.817360 | orchestrator | 2025-05-05 00:59:11.817375 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-05 00:59:11.817388 | orchestrator | Monday 05 May 2025 00:57:57 +0000 (0:00:00.054) 0:01:22.854 ************ 2025-05-05 00:59:11.817400 | orchestrator | 2025-05-05 00:59:11.817413 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-05 00:59:11.817444 | orchestrator | Monday 05 May 2025 00:57:57 +0000 (0:00:00.051) 0:01:22.906 ************ 2025-05-05 00:59:11.817469 | orchestrator | 2025-05-05 00:59:11.817483 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-05 00:59:11.817496 | orchestrator | Monday 05 May 2025 00:57:57 +0000 (0:00:00.053) 0:01:22.959 ************ 2025-05-05 00:59:11.817509 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.817521 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:59:11.817534 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:59:11.817547 | orchestrator | 2025-05-05 00:59:11.817560 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-05 00:59:11.817572 | orchestrator | Monday 05 May 2025 00:58:12 +0000 (0:00:15.520) 0:01:38.479 ************ 2025-05-05 00:59:11.817585 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.817598 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:59:11.817610 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:59:11.817623 | orchestrator | 2025-05-05 00:59:11.817635 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-05 00:59:11.817647 | orchestrator | Monday 05 May 2025 00:58:22 +0000 (0:00:09.777) 0:01:48.256 ************ 2025-05-05 00:59:11.817660 | orchestrator | changed: [testbed-node-0] 2025-05-05 
00:59:11.817672 | orchestrator | changed: [testbed-node-2] 2025-05-05 00:59:11.817685 | orchestrator | changed: [testbed-node-1] 2025-05-05 00:59:11.817698 | orchestrator | 2025-05-05 00:59:11.817710 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-05 00:59:11.817723 | orchestrator | Monday 05 May 2025 00:58:28 +0000 (0:00:05.528) 0:01:53.784 ************ 2025-05-05 00:59:11.817736 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 00:59:11.817748 | orchestrator | 2025-05-05 00:59:11.817766 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-05 00:59:11.817779 | orchestrator | Monday 05 May 2025 00:58:28 +0000 (0:00:00.786) 0:01:54.571 ************ 2025-05-05 00:59:11.817791 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:11.817804 | orchestrator | ok: [testbed-node-1] 2025-05-05 00:59:11.817817 | orchestrator | ok: [testbed-node-2] 2025-05-05 00:59:11.817829 | orchestrator | 2025-05-05 00:59:11.817842 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-05 00:59:11.817854 | orchestrator | Monday 05 May 2025 00:58:29 +0000 (0:00:01.013) 0:01:55.584 ************ 2025-05-05 00:59:11.817878 | orchestrator | changed: [testbed-node-0] 2025-05-05 00:59:11.817892 | orchestrator | 2025-05-05 00:59:11.817904 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-05 00:59:11.817917 | orchestrator | Monday 05 May 2025 00:58:31 +0000 (0:00:01.489) 0:01:57.073 ************ 2025-05-05 00:59:11.817929 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-05 00:59:11.817942 | orchestrator | 2025-05-05 00:59:11.817955 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-05 00:59:11.817967 | orchestrator | 
Monday 05 May 2025 00:58:40 +0000 (0:00:08.806) 0:02:05.880 ************ 2025-05-05 00:59:11.817980 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-05 00:59:11.817992 | orchestrator | 2025-05-05 00:59:11.818011 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-05 00:59:11.818053 | orchestrator | Monday 05 May 2025 00:58:59 +0000 (0:00:19.509) 0:02:25.390 ************ 2025-05-05 00:59:11.818067 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-05 00:59:11.818079 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-05 00:59:11.818092 | orchestrator | 2025-05-05 00:59:11.818104 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-05 00:59:11.818117 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:07.022) 0:02:32.413 ************ 2025-05-05 00:59:11.818129 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:11.818142 | orchestrator | 2025-05-05 00:59:11.818155 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-05 00:59:11.818167 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:00.116) 0:02:32.529 ************ 2025-05-05 00:59:11.818188 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:11.818212 | orchestrator | 2025-05-05 00:59:11.818237 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-05 00:59:11.818273 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.116) 0:02:32.645 ************ 2025-05-05 00:59:14.852483 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:14.852618 | orchestrator | 2025-05-05 00:59:14.852639 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-05 00:59:14.852655 | 
orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.119) 0:02:32.764 ************ 2025-05-05 00:59:14.852670 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:14.852685 | orchestrator | 2025-05-05 00:59:14.852699 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-05 00:59:14.852714 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.397) 0:02:33.162 ************ 2025-05-05 00:59:14.852728 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:14.852765 | orchestrator | 2025-05-05 00:59:14.852780 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-05 00:59:14.852794 | orchestrator | Monday 05 May 2025 00:59:10 +0000 (0:00:03.234) 0:02:36.396 ************ 2025-05-05 00:59:14.852808 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:14.852822 | orchestrator | skipping: [testbed-node-1] 2025-05-05 00:59:14.852836 | orchestrator | skipping: [testbed-node-2] 2025-05-05 00:59:14.852850 | orchestrator | 2025-05-05 00:59:14.852865 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:59:14.852881 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-05 00:59:14.852896 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-05 00:59:14.852911 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-05 00:59:14.852925 | orchestrator | 2025-05-05 00:59:14.852940 | orchestrator | 2025-05-05 00:59:14.852956 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:59:14.852972 | orchestrator | Monday 05 May 2025 00:59:11 +0000 (0:00:00.492) 0:02:36.889 ************ 2025-05-05 00:59:14.852988 | orchestrator | 
=============================================================================== 2025-05-05 00:59:14.853003 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.51s 2025-05-05 00:59:14.853020 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.52s 2025-05-05 00:59:14.853035 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.74s 2025-05-05 00:59:14.853051 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.45s 2025-05-05 00:59:14.853067 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.78s 2025-05-05 00:59:14.853105 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.84s 2025-05-05 00:59:14.853128 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.81s 2025-05-05 00:59:14.853145 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 8.28s 2025-05-05 00:59:14.853161 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.02s 2025-05-05 00:59:14.853323 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.53s 2025-05-05 00:59:14.853379 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.53s 2025-05-05 00:59:14.853405 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.37s 2025-05-05 00:59:14.853429 | orchestrator | keystone : Creating default user role ----------------------------------- 3.23s 2025-05-05 00:59:14.853450 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.17s 2025-05-05 00:59:14.853471 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 3.17s 2025-05-05 00:59:14.853486 | orchestrator | keystone : 
Check keystone containers ------------------------------------ 2.80s 2025-05-05 00:59:14.853500 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2025-05-05 00:59:14.853514 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.48s 2025-05-05 00:59:14.853528 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.34s 2025-05-05 00:59:14.853542 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.24s 2025-05-05 00:59:14.853556 | orchestrator | 2025-05-05 00:59:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:59:14.853590 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 00:59:14.857410 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:59:14.857475 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED 2025-05-05 00:59:14.858221 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED 2025-05-05 00:59:14.859302 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED 2025-05-05 00:59:14.860450 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 00:59:14.861678 | orchestrator | 2025-05-05 00:59:14 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 00:59:17.898314 | orchestrator | 2025-05-05 00:59:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:59:17.898469 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 00:59:17.901133 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in 
state STARTED 2025-05-05 00:59:17.903733 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED 2025-05-05 00:59:17.905209 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED 2025-05-05 00:59:17.906377 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED 2025-05-05 00:59:17.907533 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 00:59:17.908760 | orchestrator | 2025-05-05 00:59:17 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 00:59:17.909023 | orchestrator | 2025-05-05 00:59:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:59:20.938777 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 00:59:20.939407 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:59:20.939819 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED 2025-05-05 00:59:20.939859 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED 2025-05-05 00:59:20.940375 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state STARTED 2025-05-05 00:59:20.941217 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 00:59:20.941528 | orchestrator | 2025-05-05 00:59:20 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 00:59:23.991591 | orchestrator | 2025-05-05 00:59:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 00:59:23.991734 | orchestrator | 2025-05-05 00:59:23 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state 
STARTED 2025-05-05 00:59:23.994280 | orchestrator | 2025-05-05 00:59:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 00:59:23.995638 | orchestrator | 2025-05-05 00:59:23 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED 2025-05-05 00:59:23.997964 | orchestrator | 2025-05-05 00:59:23 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED 2025-05-05 00:59:23.999294 | orchestrator | 2025-05-05 00:59:23 | INFO  | Task 51f1f303-b295-4bed-b26c-e2a8e95151fa is in state SUCCESS 2025-05-05 00:59:24.000937 | orchestrator | 2025-05-05 00:59:24.000995 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-05 00:59:24.001011 | orchestrator | 2025-05-05 00:59:24.001026 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-05 00:59:24.001042 | orchestrator | 2025-05-05 00:59:24.001074 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-05 00:59:24.001089 | orchestrator | Monday 05 May 2025 00:58:56 +0000 (0:00:00.445) 0:00:00.445 ************ 2025-05-05 00:59:24.001103 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-05 00:59:24.001118 | orchestrator | 2025-05-05 00:59:24.001134 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-05 00:59:24.001159 | orchestrator | Monday 05 May 2025 00:58:56 +0000 (0:00:00.204) 0:00:00.650 ************ 2025-05-05 00:59:24.001183 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:59:24.001207 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-05 00:59:24.001230 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-05 00:59:24.001254 | orchestrator | 2025-05-05 00:59:24.001281 | orchestrator | TASK [ceph-facts : 
include facts.yml] ****************************************** 2025-05-05 00:59:24.001310 | orchestrator | Monday 05 May 2025 00:58:57 +0000 (0:00:00.821) 0:00:01.472 ************ 2025-05-05 00:59:24.001337 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-05 00:59:24.001393 | orchestrator | 2025-05-05 00:59:24.001417 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-05 00:59:24.001432 | orchestrator | Monday 05 May 2025 00:58:57 +0000 (0:00:00.212) 0:00:01.685 ************ 2025-05-05 00:59:24.001447 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001462 | orchestrator | 2025-05-05 00:59:24.001476 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-05 00:59:24.001490 | orchestrator | Monday 05 May 2025 00:58:58 +0000 (0:00:00.605) 0:00:02.290 ************ 2025-05-05 00:59:24.001530 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001547 | orchestrator | 2025-05-05 00:59:24.001563 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-05 00:59:24.001579 | orchestrator | Monday 05 May 2025 00:58:58 +0000 (0:00:00.129) 0:00:02.419 ************ 2025-05-05 00:59:24.001595 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001610 | orchestrator | 2025-05-05 00:59:24.001627 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-05 00:59:24.001643 | orchestrator | Monday 05 May 2025 00:58:58 +0000 (0:00:00.468) 0:00:02.888 ************ 2025-05-05 00:59:24.001659 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001675 | orchestrator | 2025-05-05 00:59:24.001698 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-05 00:59:24.001713 | orchestrator | Monday 05 May 2025 00:58:59 +0000 (0:00:00.141) 0:00:03.029 ************ 2025-05-05 
00:59:24.001729 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001745 | orchestrator | 2025-05-05 00:59:24.001761 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-05 00:59:24.001776 | orchestrator | Monday 05 May 2025 00:58:59 +0000 (0:00:00.136) 0:00:03.165 ************ 2025-05-05 00:59:24.001792 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001809 | orchestrator | 2025-05-05 00:59:24.001824 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-05 00:59:24.001840 | orchestrator | Monday 05 May 2025 00:58:59 +0000 (0:00:00.135) 0:00:03.301 ************ 2025-05-05 00:59:24.001856 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.001872 | orchestrator | 2025-05-05 00:59:24.001889 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-05 00:59:24.001903 | orchestrator | Monday 05 May 2025 00:58:59 +0000 (0:00:00.135) 0:00:03.437 ************ 2025-05-05 00:59:24.001917 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.001931 | orchestrator | 2025-05-05 00:59:24.001945 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-05 00:59:24.001959 | orchestrator | Monday 05 May 2025 00:58:59 +0000 (0:00:00.315) 0:00:03.753 ************ 2025-05-05 00:59:24.001973 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:59:24.001987 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-05 00:59:24.002001 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-05 00:59:24.002015 | orchestrator | 2025-05-05 00:59:24.002079 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-05 00:59:24.002095 | orchestrator | Monday 05 May 2025 00:59:00 +0000 (0:00:00.649) 
0:00:04.402 ************ 2025-05-05 00:59:24.002109 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.002124 | orchestrator | 2025-05-05 00:59:24.002138 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-05 00:59:24.002152 | orchestrator | Monday 05 May 2025 00:59:00 +0000 (0:00:00.234) 0:00:04.637 ************ 2025-05-05 00:59:24.002167 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:59:24.002181 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-05 00:59:24.002195 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-05 00:59:24.002209 | orchestrator | 2025-05-05 00:59:24.002223 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-05 00:59:24.002237 | orchestrator | Monday 05 May 2025 00:59:02 +0000 (0:00:01.983) 0:00:06.620 ************ 2025-05-05 00:59:24.002251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:59:24.002265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:59:24.002279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:59:24.002294 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.002308 | orchestrator | 2025-05-05 00:59:24.002322 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-05 00:59:24.002441 | orchestrator | Monday 05 May 2025 00:59:03 +0000 (0:00:00.406) 0:00:07.027 ************ 2025-05-05 00:59:24.002469 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-05 00:59:24.002488 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-05 00:59:24.002504 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-05 00:59:24.002519 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.002534 | orchestrator | 2025-05-05 00:59:24.002549 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-05 00:59:24.002564 | orchestrator | Monday 05 May 2025 00:59:03 +0000 (0:00:00.784) 0:00:07.811 ************ 2025-05-05 00:59:24.002580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-05 00:59:24.002829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-05 00:59:24.002858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-05 00:59:24.002876 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.002890 | orchestrator | 2025-05-05 00:59:24.002904 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-05 00:59:24.002918 | orchestrator | Monday 05 May 2025 00:59:04 +0000 (0:00:00.172) 0:00:07.983 ************ 2025-05-05 00:59:24.002936 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '40b51300a323', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-05 00:59:01.332235', 'end': '2025-05-05 00:59:01.375080', 'delta': '0:00:00.042845', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['40b51300a323'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-05 00:59:24.002954 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '3359a2970920', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-05 00:59:01.900990', 'end': '2025-05-05 00:59:01.954677', 'delta': '0:00:00.053687', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3359a2970920'], 'stderr_lines': 
[], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-05 00:59:24.002995 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a163903501d3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-05 00:59:02.493293', 'end': '2025-05-05 00:59:02.536045', 'delta': '0:00:00.042752', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a163903501d3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-05 00:59:24.003011 | orchestrator | 2025-05-05 00:59:24.003025 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-05 00:59:24.003040 | orchestrator | Monday 05 May 2025 00:59:04 +0000 (0:00:00.212) 0:00:08.196 ************ 2025-05-05 00:59:24.003054 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.003069 | orchestrator | 2025-05-05 00:59:24.003083 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-05 00:59:24.003097 | orchestrator | Monday 05 May 2025 00:59:04 +0000 (0:00:00.243) 0:00:08.440 ************ 2025-05-05 00:59:24.003111 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-05 00:59:24.003125 | orchestrator | 2025-05-05 00:59:24.003140 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-05 00:59:24.003154 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:01.608) 0:00:10.049 ************ 2025-05-05 00:59:24.003168 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003182 | orchestrator 
| 2025-05-05 00:59:24.003203 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-05 00:59:24.003217 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:00.131) 0:00:10.180 ************ 2025-05-05 00:59:24.003231 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003245 | orchestrator | 2025-05-05 00:59:24.003259 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-05 00:59:24.003274 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:00.206) 0:00:10.387 ************ 2025-05-05 00:59:24.003288 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003302 | orchestrator | 2025-05-05 00:59:24.003317 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-05 00:59:24.003331 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:00.119) 0:00:10.507 ************ 2025-05-05 00:59:24.003367 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.003382 | orchestrator | 2025-05-05 00:59:24.003396 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-05 00:59:24.003410 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:00.130) 0:00:10.637 ************ 2025-05-05 00:59:24.003425 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003441 | orchestrator | 2025-05-05 00:59:24.003456 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-05 00:59:24.003473 | orchestrator | Monday 05 May 2025 00:59:06 +0000 (0:00:00.220) 0:00:10.858 ************ 2025-05-05 00:59:24.003488 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003504 | orchestrator | 2025-05-05 00:59:24.003520 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-05 00:59:24.003534 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.115) 
0:00:10.974 ************ 2025-05-05 00:59:24.003548 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003562 | orchestrator | 2025-05-05 00:59:24.003576 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-05 00:59:24.003599 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.125) 0:00:11.099 ************ 2025-05-05 00:59:24.003613 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003627 | orchestrator | 2025-05-05 00:59:24.003641 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-05 00:59:24.003655 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.119) 0:00:11.218 ************ 2025-05-05 00:59:24.003668 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003682 | orchestrator | 2025-05-05 00:59:24.003696 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-05 00:59:24.003710 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.132) 0:00:11.351 ************ 2025-05-05 00:59:24.003724 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003738 | orchestrator | 2025-05-05 00:59:24.003752 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-05 00:59:24.003766 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.308) 0:00:11.660 ************ 2025-05-05 00:59:24.003780 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003794 | orchestrator | 2025-05-05 00:59:24.003808 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-05 00:59:24.003822 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.139) 0:00:11.800 ************ 2025-05-05 00:59:24.003836 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.003850 | orchestrator | 2025-05-05 00:59:24.003864 | orchestrator | TASK [ceph-facts : set_fact 
devices generate device list when osd_auto_discovery] *** 2025-05-05 00:59:24.003878 | orchestrator | Monday 05 May 2025 00:59:07 +0000 (0:00:00.134) 0:00:11.934 ************ 2025-05-05 00:59:24.003892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.003915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.003931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.003945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-05-05 00:59:24.003965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.003980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.004008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.004023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-05 00:59:24.004049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part1', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part14', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part15', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part16', 'scsi-SQEMU_QEMU_HARDDISK_34b4e4a4-0893-4c21-853f-0a97d76ef819-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:59:24.004067 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d84f93e-1c6d-4691-b492-2a4ac16c3944', 'scsi-SQEMU_QEMU_HARDDISK_3d84f93e-1c6d-4691-b492-2a4ac16c3944'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:59:24.004084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_538b5ef1-8671-4fc9-a3c4-cba69448f95c', 'scsi-SQEMU_QEMU_HARDDISK_538b5ef1-8671-4fc9-a3c4-cba69448f95c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:59:24.004106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b4716be-1a57-4f60-96f3-25458ff8018c', 'scsi-SQEMU_QEMU_HARDDISK_6b4716be-1a57-4f60-96f3-25458ff8018c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:59:24.004122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-05-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-05 00:59:24.004137 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004151 | orchestrator | 2025-05-05 00:59:24.004166 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-05 00:59:24.004180 | orchestrator | Monday 05 May 2025 00:59:08 +0000 (0:00:00.267) 0:00:12.202 ************ 2025-05-05 00:59:24.004194 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004208 | orchestrator | 2025-05-05 00:59:24.004223 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-05 00:59:24.004237 | orchestrator | Monday 05 May 2025 00:59:08 +0000 (0:00:00.233) 0:00:12.435 ************ 2025-05-05 00:59:24.004251 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004265 | orchestrator | 2025-05-05 00:59:24.004279 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-05 00:59:24.004293 | orchestrator | Monday 05 May 2025 00:59:08 +0000 (0:00:00.129) 0:00:12.565 ************ 2025-05-05 00:59:24.004307 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004321 | orchestrator | 2025-05-05 00:59:24.004335 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-05 00:59:24.004405 | orchestrator | Monday 05 May 2025 00:59:08 +0000 (0:00:00.124) 0:00:12.689 ************ 2025-05-05 00:59:24.004431 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.004447 | orchestrator | 2025-05-05 00:59:24.004461 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-05 00:59:24.004475 | orchestrator | Monday 05 May 2025 00:59:09 +0000 (0:00:00.520) 0:00:13.210 ************ 2025-05-05 00:59:24.004489 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.004504 | orchestrator | 2025-05-05 00:59:24.004518 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-05 00:59:24.004532 | orchestrator | Monday 05 May 2025 00:59:09 +0000 (0:00:00.123) 0:00:13.333 ************ 2025-05-05 00:59:24.004546 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.004560 | orchestrator | 2025-05-05 00:59:24.004574 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-05 00:59:24.004589 | orchestrator | Monday 05 May 2025 00:59:09 +0000 (0:00:00.473) 0:00:13.807 ************ 2025-05-05 00:59:24.004603 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.004624 | orchestrator | 2025-05-05 00:59:24.004639 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-05 00:59:24.004653 | orchestrator | Monday 05 May 2025 00:59:10 +0000 (0:00:00.308) 0:00:14.116 ************ 2025-05-05 00:59:24.004667 | orchestrator | skipping: 
[testbed-node-0] 2025-05-05 00:59:24.004681 | orchestrator | 2025-05-05 00:59:24.004695 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-05 00:59:24.004709 | orchestrator | Monday 05 May 2025 00:59:10 +0000 (0:00:00.239) 0:00:14.355 ************ 2025-05-05 00:59:24.004723 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004737 | orchestrator | 2025-05-05 00:59:24.004752 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-05 00:59:24.004764 | orchestrator | Monday 05 May 2025 00:59:10 +0000 (0:00:00.140) 0:00:14.496 ************ 2025-05-05 00:59:24.004777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:59:24.004790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:59:24.004802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:59:24.004814 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004827 | orchestrator | 2025-05-05 00:59:24.004840 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-05 00:59:24.004852 | orchestrator | Monday 05 May 2025 00:59:11 +0000 (0:00:00.469) 0:00:14.965 ************ 2025-05-05 00:59:24.004865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:59:24.004878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:59:24.004890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:59:24.004903 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.004915 | orchestrator | 2025-05-05 00:59:24.004928 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-05 00:59:24.004940 | orchestrator | Monday 05 May 2025 00:59:11 +0000 (0:00:00.457) 0:00:15.422 ************ 2025-05-05 00:59:24.004953 | orchestrator 
| ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:59:24.004965 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-05 00:59:24.004978 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-05 00:59:24.004990 | orchestrator | 2025-05-05 00:59:24.005002 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-05 00:59:24.005015 | orchestrator | Monday 05 May 2025 00:59:12 +0000 (0:00:01.157) 0:00:16.580 ************ 2025-05-05 00:59:24.005027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:59:24.005040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:59:24.005052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:59:24.005064 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.005077 | orchestrator | 2025-05-05 00:59:24.005089 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-05 00:59:24.005102 | orchestrator | Monday 05 May 2025 00:59:12 +0000 (0:00:00.260) 0:00:16.841 ************ 2025-05-05 00:59:24.005114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-05 00:59:24.005126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-05 00:59:24.005139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-05 00:59:24.005151 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.005163 | orchestrator | 2025-05-05 00:59:24.005175 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-05 00:59:24.005188 | orchestrator | Monday 05 May 2025 00:59:13 +0000 (0:00:00.218) 0:00:17.060 ************ 2025-05-05 00:59:24.005201 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-05 00:59:24.005213 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-05 00:59:24.005226 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-05 00:59:24.005245 | orchestrator | 2025-05-05 00:59:24.005258 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-05 00:59:24.005270 | orchestrator | Monday 05 May 2025 00:59:13 +0000 (0:00:00.228) 0:00:17.288 ************ 2025-05-05 00:59:24.005283 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.005295 | orchestrator | 2025-05-05 00:59:24.005307 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-05 00:59:24.005320 | orchestrator | Monday 05 May 2025 00:59:13 +0000 (0:00:00.134) 0:00:17.422 ************ 2025-05-05 00:59:24.005332 | orchestrator | skipping: [testbed-node-0] 2025-05-05 00:59:24.005370 | orchestrator | 2025-05-05 00:59:24.005395 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-05 00:59:24.005419 | orchestrator | Monday 05 May 2025 00:59:13 +0000 (0:00:00.305) 0:00:17.728 ************ 2025-05-05 00:59:24.005444 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:59:24.005469 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-05 00:59:24.005482 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-05 00:59:24.005495 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-05 00:59:24.005513 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-05 00:59:24.005526 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-05 00:59:24.005539 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-05 00:59:24.005551 | orchestrator | 2025-05-05 00:59:24.005563 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-05 00:59:24.005576 | orchestrator | Monday 05 May 2025 00:59:14 +0000 (0:00:00.842) 0:00:18.571 ************ 2025-05-05 00:59:24.005588 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-05 00:59:24.005601 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-05 00:59:24.005613 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-05 00:59:24.005625 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-05 00:59:24.005638 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-05 00:59:24.005650 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-05 00:59:24.005663 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-05 00:59:24.005675 | orchestrator | 2025-05-05 00:59:24.005688 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-05 00:59:24.005700 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:01.697) 0:00:20.268 ************ 2025-05-05 00:59:24.005713 | orchestrator | ok: [testbed-node-0] 2025-05-05 00:59:24.005726 | orchestrator | 2025-05-05 00:59:24.005738 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-05 00:59:24.005751 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.392) 0:00:20.661 ************ 2025-05-05 00:59:24.005763 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 00:59:24.005784 | orchestrator | 2025-05-05 00:59:24.005805 | orchestrator | TASK [ceph-fetch-keys : 
copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-05 00:59:24.005825 | orchestrator | Monday 05 May 2025 00:59:17 +0000 (0:00:00.563) 0:00:21.224 ************ 2025-05-05 00:59:24.005844 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-05 00:59:24.005864 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-05 00:59:24.005882 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-05 00:59:24.005912 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-05 00:59:24.005931 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-05 00:59:24.005951 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-05 00:59:24.005970 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-05 00:59:24.005989 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-05 00:59:24.006009 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-05 00:59:24.006067 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-05 00:59:24.006087 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-05 00:59:24.006109 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-05 00:59:24.006130 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-05 00:59:24.006151 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-05 00:59:24.006174 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-05 00:59:24.006196 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-05 00:59:24.006224 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-05 00:59:24.006246 | orchestrator | 2025-05-05 00:59:24.006269 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 00:59:24.006291 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-05 00:59:24.006315 | orchestrator | 2025-05-05 00:59:24.006336 | orchestrator | 2025-05-05 00:59:24.006386 | orchestrator | 2025-05-05 00:59:24.006407 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 00:59:24.006427 | orchestrator | Monday 05 May 2025 00:59:23 +0000 (0:00:06.057) 0:00:27.282 ************ 2025-05-05 00:59:24.006447 | orchestrator | =============================================================================== 2025-05-05 00:59:24.006468 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.06s 2025-05-05 00:59:24.006490 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.98s 2025-05-05 00:59:24.006511 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.70s 2025-05-05 00:59:24.006548 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.61s 2025-05-05 00:59:27.061545 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.16s 2025-05-05 00:59:27.061652 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.84s 2025-05-05 00:59:27.061663 | orchestrator | ceph-facts : convert grafana-server group name if exist 
----------------- 0.82s 2025-05-05 00:59:27.061673 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.78s 2025-05-05 00:59:27.061682 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2025-05-05 00:59:27.061691 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.61s 2025-05-05 00:59:27.061699 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.56s 2025-05-05 00:59:27.061708 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.52s 2025-05-05 00:59:27.061717 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.47s 2025-05-05 00:59:27.061725 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.47s 2025-05-05 00:59:27.061734 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.47s 2025-05-05 00:59:27.061742 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.46s 2025-05-05 00:59:27.061773 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.41s 2025-05-05 00:59:27.061783 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.39s 2025-05-05 00:59:27.061791 | orchestrator | ceph-facts : set_fact ceph_release ceph_stable_release ------------------ 0.32s 2025-05-05 00:59:27.061800 | orchestrator | ceph-facts : set_fact build dedicated_devices from resolved symlinks ---- 0.31s 2025-05-05 00:59:27.061810 | orchestrator | 2025-05-05 00:59:24 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 00:59:27.061819 | orchestrator | 2025-05-05 00:59:24 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 00:59:27.061828 | orchestrator | 2025-05-05 00:59:24 | INFO  | Wait 1 
second(s) until the next check
2025-05-05 00:59:27.061850 | orchestrator | 2025-05-05 00:59:27 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 00:59:27.062426 | orchestrator | 2025-05-05 00:59:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 00:59:27.063661 | orchestrator | 2025-05-05 00:59:27 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state STARTED
2025-05-05 00:59:27.065930 | orchestrator | 2025-05-05 00:59:27 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED
2025-05-05 00:59:27.067777 | orchestrator | 2025-05-05 00:59:27 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 00:59:27.068329 | orchestrator | 2025-05-05 00:59:27 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED
2025-05-05 00:59:30.123865 | orchestrator | 2025-05-05 00:59:30 | INFO  | Task ed23aeec-d9b1-4e14-b77b-2fc8e833558e is in state SUCCESS
2025-05-05 00:59:30.126094 | orchestrator | 2025-05-05 00:59:30 | INFO  | Task a210bc43-6012-4e3b-95f6-8549a7e218a3 is in state STARTED
2025-05-05 01:00:18.798077 | orchestrator | 2025-05-05 01:00:15 | INFO  | Wait 1
second(s) until the next check
2025-05-05 01:00:18.798198 | orchestrator | 2025-05-05 01:00:18 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:00:18.799218 | orchestrator | 2025-05-05 01:00:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:00:18.799511 | orchestrator | 2025-05-05 01:00:18 | INFO  | Task a210bc43-6012-4e3b-95f6-8549a7e218a3 is in state STARTED
2025-05-05 01:00:18.799966 | orchestrator | 2025-05-05 01:00:18 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED
2025-05-05 01:00:18.800457 | orchestrator | 2025-05-05 01:00:18 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:00:18.802253 | orchestrator | 2025-05-05 01:00:18 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED
2025-05-05 01:00:21.834645 | orchestrator | 2025-05-05 01:00:18 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:00:21.834783 | orchestrator | 2025-05-05 01:00:21 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:00:21.835583 | orchestrator | 2025-05-05 01:00:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:00:21.835617 | orchestrator | 2025-05-05 01:00:21 | INFO  | Task a210bc43-6012-4e3b-95f6-8549a7e218a3 is in state SUCCESS
2025-05-05 01:00:21.835927 | orchestrator |
2025-05-05 01:00:21.835955 | orchestrator |
2025-05-05 01:00:21.835970 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-05 01:00:21.835984 | orchestrator |
2025-05-05 01:00:21.836081 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-05-05 01:00:21.836099 | orchestrator | Monday 05 May 2025 00:58:47 +0000 (0:00:00.137) 0:00:00.137 ************
2025-05-05 01:00:21.836113 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-05 01:00:21.836127 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-05 01:00:21.836165 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-05 01:00:21.836180 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-05 01:00:21.836194 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-05 01:00:21.836220 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-05 01:00:21.836235 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-05 01:00:21.836250 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-05 01:00:21.836265 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-05 01:00:21.836280 | orchestrator |
2025-05-05 01:00:21.836295 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-05-05 01:00:21.836310 | orchestrator | Monday 05 May 2025 00:58:50 +0000 (0:00:02.872) 0:00:03.010 ************
2025-05-05 01:00:21.836325 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-05 01:00:21.836340 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-05 01:00:21.836381 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-05 01:00:21.836396 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-05 01:00:21.836410 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-05 01:00:21.836425 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-05 01:00:21.836439 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-05 01:00:21.836453 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-05 01:00:21.836467 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-05 01:00:21.836481 | orchestrator |
2025-05-05 01:00:21.836496 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-05-05 01:00:21.836510 | orchestrator | Monday 05 May 2025 00:58:50 +0000 (0:00:00.225) 0:00:03.236 ************
2025-05-05 01:00:21.836524 | orchestrator | ok: [testbed-manager] => {
2025-05-05 01:00:21.836541 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-05-05 01:00:21.836557 | orchestrator | }
2025-05-05 01:00:21.836572 | orchestrator |
2025-05-05 01:00:21.836586 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-05-05 01:00:21.836600 | orchestrator | Monday 05 May 2025 00:58:51 +0000 (0:00:00.159) 0:00:03.395 ************
2025-05-05 01:00:21.836614 | orchestrator | changed: [testbed-manager]
2025-05-05 01:00:21.836628 | orchestrator |
2025-05-05 01:00:21.836643 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-05-05 01:00:21.836657 | orchestrator | Monday 05 May 2025 00:59:23 +0000 (0:00:32.797) 0:00:36.193 ************
2025-05-05 01:00:21.836671 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-05-05 01:00:21.836686 | orchestrator |
2025-05-05 01:00:21.836700 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-05-05 01:00:21.836807 | orchestrator | Monday 05 May 2025 00:59:24 +0000 (0:00:00.451) 0:00:36.645 ************
2025-05-05 01:00:21.836826 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-05-05 01:00:21.836843 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-05-05 01:00:21.836871 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-05-05 01:00:21.836891 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-05-05 01:00:21.836907 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-05-05 01:00:21.836935 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-05-05 01:00:21.837567 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-05-05 01:00:21.837595 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-05-05 01:00:21.837609 | orchestrator |
2025-05-05 01:00:21.837624 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-05-05 01:00:21.837638 | orchestrator | Monday 05 May 2025 00:59:27 +0000 (0:00:02.846) 0:00:39.491 ************
2025-05-05 01:00:21.837653 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:00:21.837667 | orchestrator |
2025-05-05 01:00:21.837689 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:00:21.837705 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-05 01:00:21.837719 | orchestrator |
2025-05-05 01:00:21.837733 | orchestrator | Monday 05 May 2025 00:59:27 +0000 (0:00:00.025) 0:00:39.517 ************
2025-05-05 01:00:21.837747 | orchestrator | ===============================================================================
2025-05-05 01:00:21.837762 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 32.80s
2025-05-05 01:00:21.837776 | orchestrator | Check ceph keys --------------------------------------------------------- 2.87s
2025-05-05 01:00:21.837791 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.85s
2025-05-05 01:00:21.837805 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.45s
2025-05-05 01:00:21.837824 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.23s
2025-05-05 01:00:21.837838 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s
2025-05-05 01:00:21.837853 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s
2025-05-05 01:00:21.837867 | orchestrator |
2025-05-05 01:00:21.837882 | orchestrator | 2025-05-05 01:00:21 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED
2025-05-05 01:00:21.837900 | orchestrator | 2025-05-05 01:00:21 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05
01:00:21.837921 | orchestrator | 2025-05-05 01:00:21 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED
2025-05-05 01:00:24.870318 | orchestrator | 2025-05-05 01:00:24 | INFO  | Task 2532f383-ba3f-4e1c-9c07-617c3100a124 is in state STARTED
2025-05-05 01:00:55.322307 | orchestrator |
2025-05-05 01:00:55 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:00:55.324697 | orchestrator | 2025-05-05 01:00:55 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 01:00:55.326619 | orchestrator | 2025-05-05 01:00:55 | INFO  | Task 2532f383-ba3f-4e1c-9c07-617c3100a124 is in state STARTED 2025-05-05 01:00:58.368670 | orchestrator | 2025-05-05 01:00:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:00:58.368887 | orchestrator | 2025-05-05 01:00:58 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:00:58.369856 | orchestrator | 2025-05-05 01:00:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:00:58.369898 | orchestrator | 2025-05-05 01:00:58 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state STARTED 2025-05-05 01:00:58.370643 | orchestrator | 2025-05-05 01:00:58 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:00:58.371471 | orchestrator | 2025-05-05 01:00:58 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 01:00:58.372207 | orchestrator | 2025-05-05 01:00:58 | INFO  | Task 2532f383-ba3f-4e1c-9c07-617c3100a124 is in state SUCCESS 2025-05-05 01:00:58.372591 | orchestrator | 2025-05-05 01:00:58.372621 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-05 01:00:58.372636 | orchestrator | 2025-05-05 01:00:58.372651 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-05 01:00:58.372666 | orchestrator | Monday 05 May 2025 00:59:30 +0000 (0:00:00.161) 0:00:00.161 ************ 2025-05-05 01:00:58.372680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-05 01:00:58.372709 | orchestrator | 2025-05-05 01:00:58.372724 | 
orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-05 01:00:58.372759 | orchestrator | Monday 05 May 2025 00:59:30 +0000 (0:00:00.221) 0:00:00.382 ************ 2025-05-05 01:00:58.372775 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-05 01:00:58.372789 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-05 01:00:58.372804 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-05 01:00:58.372818 | orchestrator | 2025-05-05 01:00:58.372832 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-05 01:00:58.372846 | orchestrator | Monday 05 May 2025 00:59:31 +0000 (0:00:01.158) 0:00:01.541 ************ 2025-05-05 01:00:58.372861 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-05 01:00:58.372875 | orchestrator | 2025-05-05 01:00:58.372889 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-05 01:00:58.372904 | orchestrator | Monday 05 May 2025 00:59:33 +0000 (0:00:01.081) 0:00:02.623 ************ 2025-05-05 01:00:58.372918 | orchestrator | changed: [testbed-manager] 2025-05-05 01:00:58.372939 | orchestrator | 2025-05-05 01:00:58.372954 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-05 01:00:58.372968 | orchestrator | Monday 05 May 2025 00:59:33 +0000 (0:00:00.860) 0:00:03.484 ************ 2025-05-05 01:00:58.372982 | orchestrator | changed: [testbed-manager] 2025-05-05 01:00:58.373002 | orchestrator | 2025-05-05 01:00:58.373016 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-05 01:00:58.373030 | orchestrator | Monday 05 May 2025 00:59:34 +0000 (0:00:00.967) 0:00:04.451 ************ 2025-05-05 01:00:58.373044 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-05 01:00:58.373059 | orchestrator | ok: [testbed-manager] 2025-05-05 01:00:58.373073 | orchestrator | 2025-05-05 01:00:58.373087 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-05 01:00:58.373101 | orchestrator | Monday 05 May 2025 01:00:12 +0000 (0:00:38.135) 0:00:42.587 ************ 2025-05-05 01:00:58.373115 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-05 01:00:58.373129 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-05 01:00:58.373143 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-05 01:00:58.373158 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-05 01:00:58.373172 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-05 01:00:58.373186 | orchestrator | 2025-05-05 01:00:58.373200 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-05 01:00:58.373216 | orchestrator | Monday 05 May 2025 01:00:15 +0000 (0:00:02.985) 0:00:45.573 ************ 2025-05-05 01:00:58.373232 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-05 01:00:58.373250 | orchestrator | 2025-05-05 01:00:58.373266 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-05 01:00:58.373283 | orchestrator | Monday 05 May 2025 01:00:16 +0000 (0:00:00.478) 0:00:46.052 ************ 2025-05-05 01:00:58.373297 | orchestrator | skipping: [testbed-manager] 2025-05-05 01:00:58.373316 | orchestrator | 2025-05-05 01:00:58.373330 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-05 01:00:58.373344 | orchestrator | Monday 05 May 2025 01:00:16 +0000 (0:00:00.103) 0:00:46.155 ************ 2025-05-05 01:00:58.373432 | orchestrator | skipping: [testbed-manager] 2025-05-05 
01:00:58.373460 | orchestrator | 2025-05-05 01:00:58.373488 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-05 01:00:58.373517 | orchestrator | Monday 05 May 2025 01:00:16 +0000 (0:00:00.212) 0:00:46.368 ************ 2025-05-05 01:00:58.373542 | orchestrator | changed: [testbed-manager] 2025-05-05 01:00:58.373557 | orchestrator | 2025-05-05 01:00:58.373571 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-05 01:00:58.373585 | orchestrator | Monday 05 May 2025 01:00:18 +0000 (0:00:01.364) 0:00:47.732 ************ 2025-05-05 01:00:58.373610 | orchestrator | changed: [testbed-manager] 2025-05-05 01:00:58.373625 | orchestrator | 2025-05-05 01:00:58.373639 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-05 01:00:58.373653 | orchestrator | Monday 05 May 2025 01:00:18 +0000 (0:00:00.787) 0:00:48.519 ************ 2025-05-05 01:00:58.373667 | orchestrator | changed: [testbed-manager] 2025-05-05 01:00:58.373682 | orchestrator | 2025-05-05 01:00:58.373696 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-05 01:00:58.373710 | orchestrator | Monday 05 May 2025 01:00:19 +0000 (0:00:00.490) 0:00:49.010 ************ 2025-05-05 01:00:58.373725 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-05 01:00:58.373745 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-05 01:00:58.373759 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-05 01:00:58.373774 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-05 01:00:58.373788 | orchestrator | 2025-05-05 01:00:58.373803 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:00:58.373817 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-05 
01:00:58.373832 | orchestrator | 2025-05-05 01:00:58.373858 | orchestrator | Monday 05 May 2025 01:00:20 +0000 (0:00:01.243) 0:00:50.253 ************ 2025-05-05 01:01:01.406702 | orchestrator | =============================================================================== 2025-05-05 01:01:01.406803 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.14s 2025-05-05 01:01:01.406822 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 2.99s 2025-05-05 01:01:01.406838 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s 2025-05-05 01:01:01.406852 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.24s 2025-05-05 01:01:01.406866 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.16s 2025-05-05 01:01:01.406881 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.08s 2025-05-05 01:01:01.406895 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2025-05-05 01:01:01.406910 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.86s 2025-05-05 01:01:01.406924 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s 2025-05-05 01:01:01.406938 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.49s 2025-05-05 01:01:01.406952 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2025-05-05 01:01:01.406966 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-05-05 01:01:01.406980 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.21s 2025-05-05 01:01:01.407100 | orchestrator | osism.services.cephclient : Include package tasks 
----------------------- 0.10s 2025-05-05 01:01:13.545236 | orchestrator | 2025-05-05 01:01:13 | INFO  | Task 
2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED 2025-05-05 01:01:13.545524 | orchestrator | 2025-05-05 01:01:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:01:16.569267 | orchestrator | 2025-05-05 01:01:16 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED 2025-05-05 01:01:16.569952 | orchestrator | 2025-05-05 01:01:16 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:01:16.570011 | orchestrator | 2025-05-05 01:01:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:01:16.570092 | orchestrator | 2025-05-05 01:01:16 | INFO  | Task 75508e18-9cb9-4cb6-b0d3-cc3a23a903f5 is in state SUCCESS 2025-05-05 01:01:16.571654 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-05 01:01:16.571737 | orchestrator | 2025-05-05 01:01:16.571756 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-05 01:01:16.571771 | orchestrator | 2025-05-05 01:01:16.571786 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-05 01:01:16.571801 | orchestrator | Monday 05 May 2025 01:00:23 +0000 (0:00:00.325) 0:00:00.325 ************ 2025-05-05 01:01:16.571815 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572286 | orchestrator | 2025-05-05 01:01:16.572312 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-05 01:01:16.572327 | orchestrator | Monday 05 May 2025 01:00:25 +0000 (0:00:01.486) 0:00:01.811 ************ 2025-05-05 01:01:16.572341 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572377 | orchestrator | 2025-05-05 01:01:16.572393 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-05 01:01:16.572407 | orchestrator | Monday 05 May 2025 01:00:25 +0000 (0:00:00.831) 0:00:02.643 
************ 2025-05-05 01:01:16.572421 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572442 | orchestrator | 2025-05-05 01:01:16.572464 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-05 01:01:16.572480 | orchestrator | Monday 05 May 2025 01:00:26 +0000 (0:00:00.814) 0:00:03.458 ************ 2025-05-05 01:01:16.572494 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572508 | orchestrator | 2025-05-05 01:01:16.572522 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-05 01:01:16.572536 | orchestrator | Monday 05 May 2025 01:00:27 +0000 (0:00:00.787) 0:00:04.245 ************ 2025-05-05 01:01:16.572549 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572563 | orchestrator | 2025-05-05 01:01:16.572577 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-05 01:01:16.572599 | orchestrator | Monday 05 May 2025 01:00:28 +0000 (0:00:00.882) 0:00:05.128 ************ 2025-05-05 01:01:16.572613 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572627 | orchestrator | 2025-05-05 01:01:16.572641 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-05 01:01:16.572655 | orchestrator | Monday 05 May 2025 01:00:29 +0000 (0:00:00.897) 0:00:06.025 ************ 2025-05-05 01:01:16.572669 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572683 | orchestrator | 2025-05-05 01:01:16.572697 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-05 01:01:16.572712 | orchestrator | Monday 05 May 2025 01:00:30 +0000 (0:00:01.456) 0:00:07.481 ************ 2025-05-05 01:01:16.572726 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572740 | orchestrator | 2025-05-05 01:01:16.572754 | orchestrator | TASK [Create admin user] 
******************************************************* 2025-05-05 01:01:16.572768 | orchestrator | Monday 05 May 2025 01:00:31 +0000 (0:00:01.158) 0:00:08.639 ************ 2025-05-05 01:01:16.572781 | orchestrator | changed: [testbed-manager] 2025-05-05 01:01:16.572795 | orchestrator | 2025-05-05 01:01:16.572809 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-05 01:01:16.572823 | orchestrator | Monday 05 May 2025 01:00:50 +0000 (0:00:18.112) 0:00:26.752 ************ 2025-05-05 01:01:16.572837 | orchestrator | skipping: [testbed-manager] 2025-05-05 01:01:16.572851 | orchestrator | 2025-05-05 01:01:16.572865 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-05 01:01:16.572879 | orchestrator | 2025-05-05 01:01:16.572893 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-05 01:01:16.572909 | orchestrator | Monday 05 May 2025 01:00:50 +0000 (0:00:00.726) 0:00:27.478 ************ 2025-05-05 01:01:16.572925 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.572941 | orchestrator | 2025-05-05 01:01:16.572957 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-05 01:01:16.572972 | orchestrator | 2025-05-05 01:01:16.573004 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-05 01:01:16.573021 | orchestrator | Monday 05 May 2025 01:00:52 +0000 (0:00:02.141) 0:00:29.620 ************ 2025-05-05 01:01:16.573036 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:01:16.573052 | orchestrator | 2025-05-05 01:01:16.573068 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-05 01:01:16.573083 | orchestrator | 2025-05-05 01:01:16.573099 | orchestrator | TASK [Restart ceph manager service] ******************************************** 
2025-05-05 01:01:16.573114 | orchestrator | Monday 05 May 2025 01:00:54 +0000 (0:00:01.783) 0:00:31.404 ************ 2025-05-05 01:01:16.573130 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:01:16.573147 | orchestrator | 2025-05-05 01:01:16.573163 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:01:16.573180 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-05 01:01:16.573197 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:01:16.573213 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:01:16.573230 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:01:16.573246 | orchestrator | 2025-05-05 01:01:16.573260 | orchestrator | 2025-05-05 01:01:16.573275 | orchestrator | 2025-05-05 01:01:16.573289 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:01:16.573303 | orchestrator | Monday 05 May 2025 01:00:56 +0000 (0:00:01.430) 0:00:32.834 ************ 2025-05-05 01:01:16.573317 | orchestrator | =============================================================================== 2025-05-05 01:01:16.573331 | orchestrator | Create admin user ------------------------------------------------------ 18.11s 2025-05-05 01:01:16.573426 | orchestrator | Restart ceph manager service -------------------------------------------- 5.36s 2025-05-05 01:01:16.573446 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.49s 2025-05-05 01:01:16.573461 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.46s 2025-05-05 01:01:16.573475 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.16s 
2025-05-05 01:01:16.573490 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s 2025-05-05 01:01:16.573504 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.88s 2025-05-05 01:01:16.573518 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.83s 2025-05-05 01:01:16.573532 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.81s 2025-05-05 01:01:16.573547 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.79s 2025-05-05 01:01:16.573566 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.73s 2025-05-05 01:01:16.573581 | orchestrator | 2025-05-05 01:01:16.573595 | orchestrator | 2025-05-05 01:01:16.573609 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:01:16.573623 | orchestrator | 2025-05-05 01:01:16.573638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 01:01:16.573651 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.238) 0:00:00.238 ************ 2025-05-05 01:01:16.573665 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:01:16.573680 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:01:16.573694 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:01:16.573709 | orchestrator | 2025-05-05 01:01:16.573723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:01:16.573737 | orchestrator | Monday 05 May 2025 00:59:17 +0000 (0:00:00.446) 0:00:00.685 ************ 2025-05-05 01:01:16.573751 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-05 01:01:16.573773 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-05 01:01:16.573787 | orchestrator | ok: [testbed-node-2] => 
(item=enable_barbican_True) 2025-05-05 01:01:16.573802 | orchestrator | 2025-05-05 01:01:16.573816 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-05 01:01:16.573830 | orchestrator | 2025-05-05 01:01:16.573844 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-05 01:01:16.573858 | orchestrator | Monday 05 May 2025 00:59:17 +0000 (0:00:00.328) 0:00:01.013 ************ 2025-05-05 01:01:16.573872 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:01:16.573887 | orchestrator | 2025-05-05 01:01:16.573901 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-05 01:01:16.573915 | orchestrator | Monday 05 May 2025 00:59:17 +0000 (0:00:00.553) 0:00:01.567 ************ 2025-05-05 01:01:16.573929 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-05 01:01:16.573943 | orchestrator | 2025-05-05 01:01:16.573957 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-05 01:01:16.573972 | orchestrator | Monday 05 May 2025 00:59:21 +0000 (0:00:03.706) 0:00:05.273 ************ 2025-05-05 01:01:16.573986 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-05 01:01:16.574000 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-05 01:01:16.574014 | orchestrator | 2025-05-05 01:01:16.574075 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-05 01:01:16.574090 | orchestrator | Monday 05 May 2025 00:59:27 +0000 (0:00:06.252) 0:00:11.526 ************ 2025-05-05 01:01:16.574105 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-05 01:01:16.574119 | orchestrator | 
2025-05-05 01:01:16.574133 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-05 01:01:16.574147 | orchestrator | Monday 05 May 2025 00:59:31 +0000 (0:00:03.387) 0:00:14.913 ************ 2025-05-05 01:01:16.574162 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:01:16.574176 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-05 01:01:16.574190 | orchestrator | 2025-05-05 01:01:16.574204 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-05 01:01:16.574218 | orchestrator | Monday 05 May 2025 00:59:35 +0000 (0:00:03.772) 0:00:18.685 ************ 2025-05-05 01:01:16.574232 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-05 01:01:16.574246 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-05 01:01:16.574260 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-05 01:01:16.574275 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-05 01:01:16.574289 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-05 01:01:16.574303 | orchestrator | 2025-05-05 01:01:16.574317 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-05 01:01:16.574331 | orchestrator | Monday 05 May 2025 00:59:50 +0000 (0:00:15.597) 0:00:34.283 ************ 2025-05-05 01:01:16.574346 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-05 01:01:16.574406 | orchestrator | 2025-05-05 01:01:16.574422 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-05 01:01:16.574436 | orchestrator | Monday 05 May 2025 00:59:54 +0000 (0:00:04.089) 0:00:38.372 ************ 2025-05-05 01:01:16.574462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.574492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.574508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.574524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.574540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.574571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.574587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.574602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.574617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.574630 | orchestrator | 2025-05-05 01:01:16.574643 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-05 01:01:16.574655 | orchestrator | Monday 05 May 2025 00:59:56 +0000 (0:00:01.972) 0:00:40.345 ************ 2025-05-05 01:01:16.574668 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-05 01:01:16.574681 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-05 01:01:16.574694 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-05 01:01:16.574706 | orchestrator | 2025-05-05 01:01:16.574719 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-05 01:01:16.574731 | orchestrator | Monday 05 May 2025 00:59:58 +0000 (0:00:02.044) 0:00:42.389 ************ 
2025-05-05 01:01:16.574744 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.574757 | orchestrator | 2025-05-05 01:01:16.574769 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-05 01:01:16.574782 | orchestrator | Monday 05 May 2025 00:59:58 +0000 (0:00:00.116) 0:00:42.505 ************ 2025-05-05 01:01:16.574794 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.574807 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:01:16.574819 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:01:16.574832 | orchestrator | 2025-05-05 01:01:16.574844 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-05 01:01:16.574862 | orchestrator | Monday 05 May 2025 00:59:59 +0000 (0:00:00.310) 0:00:42.815 ************ 2025-05-05 01:01:16.574875 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:01:16.574888 | orchestrator | 2025-05-05 01:01:16.574900 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-05 01:01:16.574918 | orchestrator | Monday 05 May 2025 01:00:00 +0000 (0:00:01.318) 0:00:44.134 ************ 2025-05-05 01:01:16.574938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.574953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.574967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.574982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575072 | orchestrator | 2025-05-05 01:01:16.575085 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-05 01:01:16.575098 | orchestrator | Monday 05 May 2025 01:00:03 +0000 (0:00:03.431) 0:00:47.566 ************ 2025-05-05 01:01:16.575111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.575137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.575178 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.575192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575223 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:01:16.575243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.575257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575283 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:01:16.575296 | orchestrator | 2025-05-05 01:01:16.575309 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-05 01:01:16.575322 | orchestrator | Monday 05 May 2025 01:00:05 +0000 (0:00:01.703) 0:00:49.270 ************ 2025-05-05 01:01:16.575335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.575377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575410 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.575424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.575438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.575474 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575488 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:01:16.575501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.575533 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:01:16.575546 | orchestrator | 2025-05-05 01:01:16.575558 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-05 01:01:16.575571 | orchestrator | Monday 05 May 2025 01:00:07 +0000 (0:00:01.317) 0:00:50.587 ************ 2025-05-05 01:01:16.575585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.575598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.575618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.575637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.575722 | orchestrator | 2025-05-05 01:01:16.575735 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-05 01:01:16.575748 | orchestrator | Monday 05 May 2025 01:00:10 
+0000 (0:00:03.329) 0:00:53.916 ************ 2025-05-05 01:01:16.575761 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:01:16.575774 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.575786 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:01:16.575799 | orchestrator | 2025-05-05 01:01:16.575811 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-05 01:01:16.575824 | orchestrator | Monday 05 May 2025 01:00:12 +0000 (0:00:02.054) 0:00:55.970 ************ 2025-05-05 01:01:16.575841 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 01:01:16.575855 | orchestrator | 2025-05-05 01:01:16.575867 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-05 01:01:16.575880 | orchestrator | Monday 05 May 2025 01:00:14 +0000 (0:00:02.074) 0:00:58.045 ************ 2025-05-05 01:01:16.575892 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:01:16.575905 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:01:16.575917 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.575930 | orchestrator | 2025-05-05 01:01:16.575942 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-05 01:01:16.575954 | orchestrator | Monday 05 May 2025 01:00:16 +0000 (0:00:01.634) 0:00:59.679 ************ 2025-05-05 01:01:16.575967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.575988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.576002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.576016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576049 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2025-05-05 01:01:16.576095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576108 | orchestrator | 2025-05-05 01:01:16.576121 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-05 01:01:16.576134 | orchestrator | Monday 05 May 2025 01:00:27 +0000 (0:00:11.423) 0:01:11.103 ************ 2025-05-05 01:01:16.576152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.576167 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.576186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.576200 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:01:16.576213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.576227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.576240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.576254 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.576273 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-05 01:01:16.576292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.576306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:01:16.576319 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:01:16.576332 | orchestrator | 2025-05-05 01:01:16.576345 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-05 01:01:16.576397 | orchestrator | Monday 05 May 2025 01:00:28 +0000 (0:00:01.415) 0:01:12.518 ************ 2025-05-05 01:01:16.576412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.576432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.576454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-05 01:01:16.576469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}}) 2025-05-05 01:01:16.576589 | orchestrator | 2025-05-05 01:01:16.576603 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-05 01:01:16.576614 | orchestrator | Monday 05 May 2025 01:00:32 +0000 (0:00:03.831) 0:01:16.350 ************ 2025-05-05 01:01:16.576624 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:01:16.576635 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:01:16.576645 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:01:16.576655 | orchestrator | 2025-05-05 01:01:16.576666 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-05 01:01:16.576676 | orchestrator | Monday 05 May 2025 01:00:33 +0000 (0:00:00.571) 0:01:16.921 ************ 2025-05-05 01:01:16.576687 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.576697 | orchestrator | 2025-05-05 01:01:16.576707 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-05 01:01:16.576718 | orchestrator | Monday 05 May 2025 01:00:36 +0000 (0:00:02.854) 0:01:19.776 ************ 2025-05-05 01:01:16.576728 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.576738 | orchestrator | 2025-05-05 01:01:16.576748 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-05 01:01:16.576759 | orchestrator | Monday 05 May 2025 01:00:38 +0000 (0:00:02.308) 0:01:22.085 ************ 2025-05-05 01:01:16.576769 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.576786 | orchestrator | 2025-05-05 01:01:16.576799 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-05 01:01:16.576809 | orchestrator | Monday 05 May 2025 01:00:49 +0000 (0:00:10.750) 0:01:32.835 ************ 2025-05-05 01:01:16.576820 | orchestrator | 2025-05-05 01:01:16.576830 | orchestrator | TASK [barbican : Flush 
handlers] *********************************************** 2025-05-05 01:01:16.576840 | orchestrator | Monday 05 May 2025 01:00:49 +0000 (0:00:00.044) 0:01:32.880 ************ 2025-05-05 01:01:16.576850 | orchestrator | 2025-05-05 01:01:16.576860 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-05 01:01:16.576874 | orchestrator | Monday 05 May 2025 01:00:49 +0000 (0:00:00.135) 0:01:33.016 ************ 2025-05-05 01:01:16.576884 | orchestrator | 2025-05-05 01:01:16.576895 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-05 01:01:16.576905 | orchestrator | Monday 05 May 2025 01:00:49 +0000 (0:00:00.069) 0:01:33.086 ************ 2025-05-05 01:01:16.576915 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.576925 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:01:16.576939 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:01:16.576950 | orchestrator | 2025-05-05 01:01:16.576960 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-05 01:01:16.576971 | orchestrator | Monday 05 May 2025 01:00:57 +0000 (0:00:07.928) 0:01:41.014 ************ 2025-05-05 01:01:16.576981 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.576991 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:01:16.577001 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:01:16.577016 | orchestrator | 2025-05-05 01:01:16.577026 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-05 01:01:16.577037 | orchestrator | Monday 05 May 2025 01:01:03 +0000 (0:00:05.807) 0:01:46.822 ************ 2025-05-05 01:01:16.577047 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:01:16.577057 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:01:16.577067 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:01:16.577077 | orchestrator | 
2025-05-05 01:01:16.577088 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:01:16.577098 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-05 01:01:16.577109 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-05 01:01:16.577120 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-05 01:01:16.577130 | orchestrator | 2025-05-05 01:01:16.577140 | orchestrator | 2025-05-05 01:01:16.577150 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:01:16.577165 | orchestrator | Monday 05 May 2025 01:01:14 +0000 (0:00:10.968) 0:01:57.790 ************ 2025-05-05 01:01:19.591714 | orchestrator | =============================================================================== 2025-05-05 01:01:19.591820 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.60s 2025-05-05 01:01:19.591838 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.42s 2025-05-05 01:01:19.591970 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.97s 2025-05-05 01:01:19.591992 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.75s 2025-05-05 01:01:19.592007 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.93s 2025-05-05 01:01:19.592021 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.25s 2025-05-05 01:01:19.592036 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.81s 2025-05-05 01:01:19.592051 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.09s 2025-05-05 01:01:19.592065 
| orchestrator | barbican : Check barbican containers ------------------------------------ 3.83s
2025-05-05 01:01:19.592080 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.77s
2025-05-05 01:01:19.592094 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.71s
2025-05-05 01:01:19.592108 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.43s
2025-05-05 01:01:19.592122 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.39s
2025-05-05 01:01:19.592136 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.33s
2025-05-05 01:01:19.592150 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.85s
2025-05-05 01:01:19.592165 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.31s
2025-05-05 01:01:19.592179 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.07s
2025-05-05 01:01:19.592193 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.05s
2025-05-05 01:01:19.592207 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.04s
2025-05-05 01:01:19.592221 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.97s
2025-05-05 01:01:19.592236 | orchestrator | 2025-05-05 01:01:16 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:01:19.592251 | orchestrator | 2025-05-05 01:01:16 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED
2025-05-05 01:01:19.592266 | orchestrator | 2025-05-05 01:01:16 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:01:19.592334 | orchestrator | 2025-05-05 01:01:19 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED
2025-05-05 01:01:19.592646 | orchestrator | 2025-05-05 01:01:19 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:01:19.592680 | orchestrator | 2025-05-05 01:01:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:01:19.593151 | orchestrator | 2025-05-05 01:01:19 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:01:19.593775 | orchestrator | 2025-05-05 01:01:19 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state STARTED
2025-05-05 01:02:11.329913 | orchestrator | 2025-05-05 01:02:08 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:02:11.330073 | orchestrator | 2025-05-05 01:02:11 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED
2025-05-05 01:02:11.330983 | orchestrator | 2025-05-05 01:02:11 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:02:11.333202 | orchestrator | 2025-05-05 01:02:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:02:11.333974 | orchestrator | 2025-05-05 01:02:11 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:02:11.336795 | orchestrator | 2025-05-05 01:02:11 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:02:11.340781 | orchestrator | 2025-05-05 01:02:11 | INFO  | Task 2b33ffbf-e073-44f1-bcde-3ed3d5dfa2c4 is in state SUCCESS
2025-05-05 01:02:11.344341 | orchestrator |
2025-05-05 01:02:11.344449 | orchestrator |
2025-05-05 01:02:11.344469 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 01:02:11.344485 | orchestrator |
2025-05-05 01:02:11.345442 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:02:11.345494 | orchestrator | Monday 05 May 2025 00:59:15 +0000 (0:00:00.738) 0:00:00.738 ************
2025-05-05 01:02:11.345510 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:02:11.345526 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:02:11.345540 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:02:11.345580 | orchestrator |
2025-05-05 01:02:11.345595 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:02:11.345610 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.465) 0:00:01.203 ************
2025-05-05 01:02:11.345625 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-05 01:02:11.345640 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-05 01:02:11.345655 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-05 01:02:11.345669 | orchestrator |
2025-05-05 01:02:11.345683 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-05 01:02:11.345698 | orchestrator |
2025-05-05 01:02:11.345712 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-05 01:02:11.345726 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.266) 0:00:01.470 ************
2025-05-05 01:02:11.345740 | orchestrator | included:
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:02:11.345756 | orchestrator | 2025-05-05 01:02:11.345770 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-05 01:02:11.345784 | orchestrator | Monday 05 May 2025 00:59:17 +0000 (0:00:00.657) 0:00:02.128 ************ 2025-05-05 01:02:11.345798 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-05 01:02:11.345812 | orchestrator | 2025-05-05 01:02:11.345827 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-05 01:02:11.345841 | orchestrator | Monday 05 May 2025 00:59:20 +0000 (0:00:03.825) 0:00:05.954 ************ 2025-05-05 01:02:11.345855 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-05 01:02:11.345870 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-05 01:02:11.345884 | orchestrator | 2025-05-05 01:02:11.345898 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-05 01:02:11.345913 | orchestrator | Monday 05 May 2025 00:59:27 +0000 (0:00:06.234) 0:00:12.189 ************ 2025-05-05 01:02:11.345927 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-05 01:02:11.345941 | orchestrator | 2025-05-05 01:02:11.345955 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-05 01:02:11.345969 | orchestrator | Monday 05 May 2025 00:59:30 +0000 (0:00:03.416) 0:00:15.605 ************ 2025-05-05 01:02:11.345984 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:02:11.345998 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-05 01:02:11.346012 | orchestrator | 2025-05-05 01:02:11.346096 | orchestrator | TASK 
[service-ks-register : designate | Creating roles] ************************ 2025-05-05 01:02:11.346111 | orchestrator | Monday 05 May 2025 00:59:34 +0000 (0:00:04.081) 0:00:19.686 ************ 2025-05-05 01:02:11.346125 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-05 01:02:11.346140 | orchestrator | 2025-05-05 01:02:11.346154 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-05 01:02:11.346168 | orchestrator | Monday 05 May 2025 00:59:37 +0000 (0:00:03.123) 0:00:22.810 ************ 2025-05-05 01:02:11.346183 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-05 01:02:11.346197 | orchestrator | 2025-05-05 01:02:11.346211 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-05 01:02:11.346225 | orchestrator | Monday 05 May 2025 00:59:42 +0000 (0:00:04.202) 0:00:27.013 ************ 2025-05-05 01:02:11.346241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.346326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.346347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.346406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.346775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.346813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.346862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.346879 | orchestrator | 2025-05-05 01:02:11.346894 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-05 01:02:11.346909 | orchestrator | Monday 05 May 2025 00:59:45 +0000 (0:00:03.167) 0:00:30.180 ************ 2025-05-05 01:02:11.346923 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:11.346939 | orchestrator | 2025-05-05 01:02:11.346953 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-05 01:02:11.346967 | orchestrator | Monday 05 May 2025 00:59:45 +0000 (0:00:00.113) 0:00:30.293 ************ 2025-05-05 01:02:11.346982 | 
orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:11.346996 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:11.347010 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:11.347024 | orchestrator | 2025-05-05 01:02:11.347038 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-05 01:02:11.347053 | orchestrator | Monday 05 May 2025 00:59:45 +0000 (0:00:00.464) 0:00:30.758 ************ 2025-05-05 01:02:11.347067 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:02:11.347081 | orchestrator | 2025-05-05 01:02:11.347096 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-05 01:02:11.347110 | orchestrator | Monday 05 May 2025 00:59:46 +0000 (0:00:00.588) 0:00:31.346 ************ 2025-05-05 01:02:11.347125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.347140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.347162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.347208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.347545 | orchestrator | 2025-05-05 01:02:11.347560 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-05 01:02:11.347575 | orchestrator | Monday 05 May 2025 00:59:52 +0000 (0:00:06.350) 0:00:37.697 ************ 2025-05-05 01:02:11.347589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.347611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.347626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 
01:02:11.347730 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:11.347746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.347767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.347783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347873 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:11.347888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.347911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.347927 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.347957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348018 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:11.348033 | orchestrator | 2025-05-05 01:02:11.348047 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-05 01:02:11.348107 | orchestrator | Monday 05 May 2025 00:59:54 +0000 (0:00:01.932) 0:00:39.629 ************ 2025-05-05 01:02:11.348125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.348148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.348163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348259 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:11.348274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.348296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.348311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348432 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:11.348447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.348471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.348486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.348592 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:11.348607 | orchestrator | 2025-05-05 01:02:11.348622 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-05 01:02:11.348636 | orchestrator | Monday 05 May 2025 00:59:56 +0000 (0:00:01.391) 0:00:41.020 ************ 2025-05-05 
01:02:11.348651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.348666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.348682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.348697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-05-05 01:02:11.348890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348939 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.348969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349080 | orchestrator | 2025-05-05 01:02:11.349094 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-05 01:02:11.349108 | orchestrator | Monday 05 May 2025 01:00:02 +0000 (0:00:06.213) 0:00:47.233 ************ 2025-05-05 01:02:11.349122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.349138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.349188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.349206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2025-05-05 01:02:11.349479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349571 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349669 | orchestrator | 2025-05-05 01:02:11.349684 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-05 01:02:11.349698 | orchestrator | Monday 05 May 2025 01:00:22 +0000 (0:00:20.650) 0:01:07.884 ************ 2025-05-05 01:02:11.349713 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-05 01:02:11.349727 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-05 01:02:11.349742 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-05 01:02:11.349756 | orchestrator | 2025-05-05 01:02:11.349770 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-05 01:02:11.349784 | orchestrator | Monday 
05 May 2025 01:00:31 +0000 (0:00:08.714) 0:01:16.598 ************ 2025-05-05 01:02:11.349799 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-05 01:02:11.349818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-05 01:02:11.349833 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-05 01:02:11.349847 | orchestrator | 2025-05-05 01:02:11.349861 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-05 01:02:11.349875 | orchestrator | Monday 05 May 2025 01:00:36 +0000 (0:00:05.364) 0:01:21.962 ************ 2025-05-05 01:02:11.349890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.349906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.349921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.349942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.349963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.349990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2025-05-05 01:02:11.350190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350235 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350262 | orchestrator | 2025-05-05 01:02:11.350275 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-05 01:02:11.350287 | orchestrator | Monday 05 May 2025 01:00:39 +0000 (0:00:03.017) 0:01:24.980 ************ 2025-05-05 01:02:11.350306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.350319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.350337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.350351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350612 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.350638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350651 | orchestrator | 2025-05-05 01:02:11.350664 | orchestrator | TASK [designate : include_tasks] 
*********************************************** 2025-05-05 01:02:11.350676 | orchestrator | Monday 05 May 2025 01:00:42 +0000 (0:00:02.732) 0:01:27.712 ************ 2025-05-05 01:02:11.350689 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:11.350702 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:11.350714 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:11.350727 | orchestrator | 2025-05-05 01:02:11.350739 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-05 01:02:11.350752 | orchestrator | Monday 05 May 2025 01:00:43 +0000 (0:00:00.658) 0:01:28.371 ************ 2025-05-05 01:02:11.350770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.350783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.350802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350873 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:11.350887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.350905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.350919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-05 01:02:11.350945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.350997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-05 01:02:11.351010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 
'timeout': '30'}}})  2025-05-05 01:02:11.351023 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:11.351036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.351049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.351062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-05-05 01:02:11.351079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.351099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.351112 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:11.351124 | orchestrator | 2025-05-05 01:02:11.351137 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-05 01:02:11.351150 | orchestrator | Monday 05 May 2025 01:00:44 +0000 (0:00:01.265) 0:01:29.637 ************ 2025-05-05 01:02:11.351163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.351176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.351189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-05 01:02:11.351209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.351446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-05 01:02:11.351473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-05 01:02:11.351508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-05 01:02:11.351522 | orchestrator |
2025-05-05 01:02:11.351535 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-05 01:02:11.351547 | orchestrator | Monday 05 May 2025 01:00:49 +0000 (0:00:00.638) 0:01:34.475 ************
2025-05-05 01:02:11.351560 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:02:11.351573 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:02:11.351586 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:02:11.351598 | orchestrator |
2025-05-05 01:02:11.351611 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-05 01:02:11.351623 | orchestrator | Monday 05 May 2025 01:00:50 +0000 (0:00:00.638) 0:01:35.114 ************
2025-05-05 01:02:11.351636 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-05 01:02:11.351648 | orchestrator |
2025-05-05 01:02:11.351661 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-05 01:02:11.351673 | orchestrator | Monday 05 May 2025 01:00:52 +0000 (0:00:02.478) 0:01:37.593 ************
2025-05-05 01:02:11.351693 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 01:02:11.351706 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-05 01:02:11.351718 | orchestrator |
2025-05-05 01:02:11.351731 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-05 01:02:11.351743 | orchestrator | Monday 05 May 2025 01:00:54 +0000 (0:00:02.277) 0:01:39.870 ************
2025-05-05 01:02:11.351756 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.351768 | orchestrator |
2025-05-05 01:02:11.351781 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-05 01:02:11.351793 | orchestrator | Monday 05 May 2025 01:01:09 +0000 (0:00:14.236) 0:01:54.106 ************
2025-05-05 01:02:11.351805 | orchestrator |
2025-05-05 01:02:11.351818 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-05 01:02:11.351830 | orchestrator | Monday 05 May 2025 01:01:09 +0000 (0:00:00.108) 0:01:54.214 ************
2025-05-05 01:02:11.351843 | orchestrator |
2025-05-05 01:02:11.351863 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-05 01:02:11.351880 | orchestrator | Monday 05 May 2025 01:01:09 +0000 (0:00:00.106) 0:01:54.320 ************
2025-05-05 01:02:11.351893 | orchestrator |
2025-05-05 01:02:11.351906 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-05 01:02:11.351918 | orchestrator | Monday 05 May 2025 01:01:09 +0000 (0:00:00.083) 0:01:54.403 ************
2025-05-05 01:02:11.351931 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.351944 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:02:11.351956 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:02:11.351969 | orchestrator |
2025-05-05 01:02:11.351981 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-05 01:02:11.351993 | orchestrator | Monday 05 May 2025 01:01:17 +0000 (0:00:07.651) 0:02:02.055 ************
2025-05-05 01:02:11.352006 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:02:11.352018 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:02:11.352031 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.352043 | orchestrator |
2025-05-05 01:02:11.352056 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-05 01:02:11.352068 | orchestrator | Monday 05 May 2025 01:01:26 +0000 (0:00:08.954) 0:02:11.009 ************
2025-05-05 01:02:11.352081 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.352093 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:02:11.352105 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:02:11.352117 | orchestrator |
2025-05-05 01:02:11.352130 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-05 01:02:11.352143 | orchestrator | Monday 05 May 2025 01:01:37 +0000 (0:00:11.406) 0:02:22.416 ************
2025-05-05 01:02:11.352155 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.352167 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:02:11.352180 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:02:11.352192 | orchestrator |
2025-05-05 01:02:11.352204 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-05 01:02:11.352217 | orchestrator | Monday 05 May 2025 01:01:43 +0000 (0:00:06.388) 0:02:28.805 ************
2025-05-05 01:02:11.352229 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:02:11.352242 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.352254 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:02:11.352267 | orchestrator |
2025-05-05 01:02:11.352279 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-05 01:02:11.352292 | orchestrator | Monday 05 May 2025 01:01:54 +0000 (0:00:10.749) 0:02:39.554 ************
2025-05-05 01:02:11.352304 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.352317 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:02:11.352329 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:02:11.352342 | orchestrator |
2025-05-05 01:02:11.352354 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-05 01:02:11.352396 | orchestrator | Monday 05 May 2025 01:02:04 +0000 (0:00:10.379) 0:02:49.934 ************
2025-05-05 01:02:11.352410 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:02:11.352422 | orchestrator |
2025-05-05 01:02:11.352434 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:02:11.352448 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-05 01:02:11.352461 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 01:02:11.352474 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-05 01:02:11.352486 | orchestrator |
2025-05-05 01:02:11.352499 | orchestrator |
2025-05-05 01:02:11.352511 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:02:11.352524 | orchestrator | Monday 05 May 2025 01:02:09 +0000 (0:00:04.950) 0:02:54.884 ************
2025-05-05 01:02:11.352536 | orchestrator | ===============================================================================
2025-05-05 01:02:11.352549 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.65s
2025-05-05 01:02:11.352561 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.24s
2025-05-05 01:02:11.352573 | orchestrator | designate : Restart designate-central container ------------------------ 11.41s
2025-05-05 01:02:11.352586 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.75s
2025-05-05 01:02:11.352598 | orchestrator | designate : Restart designate-worker container ------------------------- 10.38s
2025-05-05 01:02:11.352611 | orchestrator | designate : Restart designate-api container ----------------------------- 8.95s
2025-05-05 01:02:11.352624 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.71s
2025-05-05 01:02:11.352636 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.65s
2025-05-05 01:02:11.352649 | orchestrator | designate : Restart designate-producer container ------------------------ 6.39s
2025-05-05 01:02:11.352661 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.35s
2025-05-05 01:02:11.352673 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.23s
2025-05-05 01:02:11.352686 | orchestrator | designate : Copying over config.json files for services ----------------- 6.21s
2025-05-05 01:02:11.352704 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.36s
2025-05-05 01:02:11.352716 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 4.95s
2025-05-05 01:02:11.352729 | orchestrator | designate : Check designate containers ---------------------------------- 4.84s
2025-05-05 01:02:11.352741 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.20s
2025-05-05 01:02:11.352754 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.08s
2025-05-05 01:02:11.352771 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.83s
2025-05-05 01:02:14.402427 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.42s
2025-05-05 01:02:14.402560 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.17s
2025-05-05 01:02:14.402603 | orchestrator | 2025-05-05 01:02:11 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:02:14.402648 | orchestrator | 2025-05-05 01:02:14 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED
2025-05-05 01:02:14.405232 | orchestrator | 2025-05-05 01:02:14 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:02:14.407651 | orchestrator | 2025-05-05 01:02:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in
state STARTED 2025-05-05 01:02:14.411129 | orchestrator | 2025-05-05 01:02:14 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:14.414216 | orchestrator | 2025-05-05 01:02:14 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:14.414971 | orchestrator | 2025-05-05 01:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:17.469868 | orchestrator | 2025-05-05 01:02:17 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED 2025-05-05 01:02:17.470511 | orchestrator | 2025-05-05 01:02:17 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:17.472420 | orchestrator | 2025-05-05 01:02:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:17.473937 | orchestrator | 2025-05-05 01:02:17 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:17.476863 | orchestrator | 2025-05-05 01:02:17 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:17.477313 | orchestrator | 2025-05-05 01:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:20.522572 | orchestrator | 2025-05-05 01:02:20 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED 2025-05-05 01:02:20.523654 | orchestrator | 2025-05-05 01:02:20 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:20.524913 | orchestrator | 2025-05-05 01:02:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:20.527160 | orchestrator | 2025-05-05 01:02:20 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:20.528834 | orchestrator | 2025-05-05 01:02:20 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:20.529078 | orchestrator | 2025-05-05 01:02:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 
01:02:23.586967 | orchestrator | 2025-05-05 01:02:23 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED 2025-05-05 01:02:23.588499 | orchestrator | 2025-05-05 01:02:23 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:23.589638 | orchestrator | 2025-05-05 01:02:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:23.590956 | orchestrator | 2025-05-05 01:02:23 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:23.592457 | orchestrator | 2025-05-05 01:02:23 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:26.652133 | orchestrator | 2025-05-05 01:02:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:26.652269 | orchestrator | 2025-05-05 01:02:26 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state STARTED 2025-05-05 01:02:26.654228 | orchestrator | 2025-05-05 01:02:26 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:26.655902 | orchestrator | 2025-05-05 01:02:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:26.657309 | orchestrator | 2025-05-05 01:02:26 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:26.659018 | orchestrator | 2025-05-05 01:02:26 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:29.720005 | orchestrator | 2025-05-05 01:02:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:29.720156 | orchestrator | 2025-05-05 01:02:29 | INFO  | Task fd1fa6b9-a503-409c-8379-42c7930302d1 is in state SUCCESS 2025-05-05 01:02:29.721882 | orchestrator | 2025-05-05 01:02:29.721930 | orchestrator | 2025-05-05 01:02:29.721945 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:02:29.721987 | orchestrator | 2025-05-05 01:02:29.722006 | 
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:02:29.722078 | orchestrator | Monday 05 May 2025 01:01:17 +0000 (0:00:00.213) 0:00:00.213 ************
2025-05-05 01:02:29.722094 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:02:29.722110 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:02:29.722125 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:02:29.722157 | orchestrator |
2025-05-05 01:02:29.722172 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:02:29.722187 | orchestrator | Monday 05 May 2025 01:01:18 +0000 (0:00:00.524) 0:00:00.738 ************
2025-05-05 01:02:29.722202 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-05 01:02:29.722216 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-05-05 01:02:29.722237 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-05-05 01:02:29.722260 | orchestrator |
2025-05-05 01:02:29.722285 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-05-05 01:02:29.722308 | orchestrator |
2025-05-05 01:02:29.722332 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-05 01:02:29.722356 | orchestrator | Monday 05 May 2025 01:01:19 +0000 (0:00:00.526) 0:00:01.264 ************
2025-05-05 01:02:29.722417 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 01:02:29.722442 | orchestrator |
2025-05-05 01:02:29.722465 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-05-05 01:02:29.722486 | orchestrator | Monday 05 May 2025 01:01:20 +0000 (0:00:01.465) 0:00:02.729 ************
2025-05-05 01:02:29.722504 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-05-05 01:02:29.722520 | orchestrator |
2025-05-05 01:02:29.722536 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-05-05 01:02:29.722552 | orchestrator | Monday 05 May 2025 01:01:23 +0000 (0:00:03.360) 0:00:06.089 ************
2025-05-05 01:02:29.722568 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-05-05 01:02:29.722689 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-05-05 01:02:29.722711 | orchestrator |
2025-05-05 01:02:29.722728 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-05-05 01:02:29.722745 | orchestrator | Monday 05 May 2025 01:01:30 +0000 (0:00:06.465) 0:00:12.555 ************
2025-05-05 01:02:29.722761 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-05 01:02:29.722783 | orchestrator |
2025-05-05 01:02:29.722806 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-05-05 01:02:29.722831 | orchestrator | Monday 05 May 2025 01:01:33 +0000 (0:00:03.326) 0:00:15.882 ************
2025-05-05 01:02:29.722856 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-05 01:02:29.722881 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-05-05 01:02:29.722905 | orchestrator |
2025-05-05 01:02:29.722928 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-05-05 01:02:29.722953 | orchestrator | Monday 05 May 2025 01:01:37 +0000 (0:00:03.778) 0:00:19.661 ************
2025-05-05 01:02:29.722982 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-05 01:02:29.723000 | orchestrator |
2025-05-05 01:02:29.723014 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-05-05 01:02:29.723029 | orchestrator |
Monday 05 May 2025 01:01:40 +0000 (0:00:03.148) 0:00:22.809 ************ 2025-05-05 01:02:29.723044 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-05 01:02:29.723058 | orchestrator | 2025-05-05 01:02:29.723073 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-05 01:02:29.723087 | orchestrator | Monday 05 May 2025 01:01:44 +0000 (0:00:04.209) 0:00:27.019 ************ 2025-05-05 01:02:29.723117 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:29.723134 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:29.723149 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:29.723163 | orchestrator | 2025-05-05 01:02:29.723178 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-05 01:02:29.723200 | orchestrator | Monday 05 May 2025 01:01:45 +0000 (0:00:00.366) 0:00:27.385 ************ 2025-05-05 01:02:29.723217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.723255 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.723272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.723286 | orchestrator | 2025-05-05 01:02:29.723301 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 
2025-05-05 01:02:29.723315 | orchestrator | Monday 05 May 2025 01:01:46 +0000 (0:00:01.345) 0:00:28.731 ************ 2025-05-05 01:02:29.723330 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:29.723344 | orchestrator | 2025-05-05 01:02:29.723392 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-05 01:02:29.723409 | orchestrator | Monday 05 May 2025 01:01:46 +0000 (0:00:00.081) 0:00:28.813 ************ 2025-05-05 01:02:29.723423 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:29.723444 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:29.723460 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:29.723482 | orchestrator | 2025-05-05 01:02:29.723497 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-05 01:02:29.723511 | orchestrator | Monday 05 May 2025 01:01:46 +0000 (0:00:00.260) 0:00:29.074 ************ 2025-05-05 01:02:29.723526 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:02:29.723540 | orchestrator | 2025-05-05 01:02:29.723554 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-05 01:02:29.723569 | orchestrator | Monday 05 May 2025 01:01:47 +0000 (0:00:00.428) 0:00:29.503 ************ 2025-05-05 01:02:29.723584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.723643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.723662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.723677 | orchestrator | 2025-05-05 01:02:29.723692 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-05 01:02:29.723712 | orchestrator | Monday 05 May 2025 01:01:48 +0000 (0:00:01.476) 0:00:30.979 ************ 2025-05-05 01:02:29.723727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.723748 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:29.723764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.723901 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:29.723947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.723964 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:29.723978 | orchestrator | 2025-05-05 01:02:29.723993 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-05 01:02:29.724007 | orchestrator | Monday 05 May 2025 01:01:49 +0000 (0:00:00.506) 0:00:31.486 ************ 2025-05-05 01:02:29.724021 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.724036 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:29.724051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.724074 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:29.724089 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.724104 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:29.724118 | orchestrator | 2025-05-05 01:02:29.724133 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-05 01:02:29.724147 | orchestrator | Monday 05 May 2025 01:01:50 +0000 (0:00:00.742) 0:00:32.228 ************ 2025-05-05 01:02:29.724180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724233 | orchestrator | 2025-05-05 01:02:29.724248 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-05 01:02:29.724262 | orchestrator | Monday 05 May 2025 01:01:51 +0000 (0:00:01.528) 0:00:33.757 ************ 2025-05-05 01:02:29.724277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724341 | orchestrator | 2025-05-05 01:02:29.724355 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-05 01:02:29.724403 | orchestrator | Monday 05 May 2025 01:01:54 +0000 (0:00:02.710) 0:00:36.468 ************ 2025-05-05 01:02:29.724418 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-05 01:02:29.724433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-05 01:02:29.724447 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-05 
01:02:29.724462 | orchestrator | 2025-05-05 01:02:29.724476 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-05 01:02:29.724490 | orchestrator | Monday 05 May 2025 01:01:55 +0000 (0:00:01.735) 0:00:38.203 ************ 2025-05-05 01:02:29.724505 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:02:29.724519 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:02:29.724533 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:02:29.724547 | orchestrator | 2025-05-05 01:02:29.724561 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-05 01:02:29.724575 | orchestrator | Monday 05 May 2025 01:01:57 +0000 (0:00:01.830) 0:00:40.034 ************ 2025-05-05 01:02:29.724589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.724604 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:02:29.724619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.724633 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:02:29.724670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-05 01:02:29.724693 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:02:29.724708 | orchestrator | 2025-05-05 01:02:29.724723 | orchestrator | TASK [placement : Check placement containers] 
********************************** 2025-05-05 01:02:29.724737 | orchestrator | Monday 05 May 2025 01:01:58 +0000 (0:00:00.571) 0:00:40.606 ************ 2025-05-05 01:02:29.724752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-05 01:02:29.724797 | orchestrator | 2025-05-05 01:02:29.724812 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-05 01:02:29.724826 | orchestrator | Monday 05 May 2025 01:01:59 +0000 (0:00:01.266) 0:00:41.872 ************ 2025-05-05 01:02:29.724840 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:02:29.724854 | orchestrator | 2025-05-05 01:02:29.724868 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-05 01:02:29.724883 | orchestrator | Monday 05 May 2025 01:02:02 +0000 (0:00:02.408) 0:00:44.281 ************ 2025-05-05 01:02:29.724897 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:02:29.724911 | orchestrator | 2025-05-05 01:02:29.724925 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-05 01:02:29.724940 | orchestrator | Monday 05 May 2025 01:02:04 +0000 (0:00:02.269) 0:00:46.550 
************ 2025-05-05 01:02:29.724970 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:02:29.725192 | orchestrator | 2025-05-05 01:02:29.725217 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-05 01:02:29.725238 | orchestrator | Monday 05 May 2025 01:02:17 +0000 (0:00:12.908) 0:00:59.459 ************ 2025-05-05 01:02:29.725253 | orchestrator | 2025-05-05 01:02:29.725267 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-05 01:02:29.725281 | orchestrator | Monday 05 May 2025 01:02:17 +0000 (0:00:00.074) 0:00:59.533 ************ 2025-05-05 01:02:29.725295 | orchestrator | 2025-05-05 01:02:29.725309 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-05 01:02:29.725323 | orchestrator | Monday 05 May 2025 01:02:17 +0000 (0:00:00.224) 0:00:59.757 ************ 2025-05-05 01:02:29.725337 | orchestrator | 2025-05-05 01:02:29.725351 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-05 01:02:29.725426 | orchestrator | Monday 05 May 2025 01:02:17 +0000 (0:00:00.060) 0:00:59.818 ************ 2025-05-05 01:02:29.725443 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:02:29.725458 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:02:29.725472 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:02:29.725484 | orchestrator | 2025-05-05 01:02:29.725495 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:02:29.725506 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-05 01:02:29.725518 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-05 01:02:29.725528 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2025-05-05 01:02:29.725539 | orchestrator | 2025-05-05 01:02:29.725549 | orchestrator | 2025-05-05 01:02:29.725559 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:02:29.725570 | orchestrator | Monday 05 May 2025 01:02:27 +0000 (0:00:10.203) 0:01:10.022 ************ 2025-05-05 01:02:29.725580 | orchestrator | =============================================================================== 2025-05-05 01:02:29.725590 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.91s 2025-05-05 01:02:29.725601 | orchestrator | placement : Restart placement-api container ---------------------------- 10.20s 2025-05-05 01:02:29.725611 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.47s 2025-05-05 01:02:29.725621 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.21s 2025-05-05 01:02:29.725631 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.78s 2025-05-05 01:02:29.725642 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.36s 2025-05-05 01:02:29.725652 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.33s 2025-05-05 01:02:29.725662 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.15s 2025-05-05 01:02:29.725673 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.71s 2025-05-05 01:02:29.725683 | orchestrator | placement : Creating placement databases -------------------------------- 2.41s 2025-05-05 01:02:29.725694 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.27s 2025-05-05 01:02:29.725704 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.83s 2025-05-05 01:02:29.725714 | orchestrator | 
placement : Copying over placement-api wsgi configuration --------------- 1.74s 2025-05-05 01:02:29.725724 | orchestrator | placement : Copying over config.json files for services ----------------- 1.53s 2025-05-05 01:02:29.725734 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.48s 2025-05-05 01:02:29.725745 | orchestrator | placement : include_tasks ----------------------------------------------- 1.47s 2025-05-05 01:02:29.725763 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.35s 2025-05-05 01:02:29.725773 | orchestrator | placement : Check placement containers ---------------------------------- 1.27s 2025-05-05 01:02:29.725784 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.74s 2025-05-05 01:02:29.725794 | orchestrator | placement : Copying over existing policy file --------------------------- 0.57s 2025-05-05 01:02:29.725804 | orchestrator | 2025-05-05 01:02:29 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:29.725815 | orchestrator | 2025-05-05 01:02:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:29.725827 | orchestrator | 2025-05-05 01:02:29 | INFO  | Task e6675b71-210f-4f84-8ee4-0a0a539c9062 is in state STARTED 2025-05-05 01:02:29.725844 | orchestrator | 2025-05-05 01:02:29 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:29.726722 | orchestrator | 2025-05-05 01:02:29 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:32.782774 | orchestrator | 2025-05-05 01:02:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:32.782949 | orchestrator | 2025-05-05 01:02:32 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:32.783516 | orchestrator | 2025-05-05 01:02:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in 
state STARTED 2025-05-05 01:02:32.783568 | orchestrator | 2025-05-05 01:02:32 | INFO  | Task e6675b71-210f-4f84-8ee4-0a0a539c9062 is in state STARTED 2025-05-05 01:02:32.784284 | orchestrator | 2025-05-05 01:02:32 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:32.785548 | orchestrator | 2025-05-05 01:02:32 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:35.819063 | orchestrator | 2025-05-05 01:02:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:35.819186 | orchestrator | 2025-05-05 01:02:35 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:35.819633 | orchestrator | 2025-05-05 01:02:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:35.820476 | orchestrator | 2025-05-05 01:02:35 | INFO  | Task e6675b71-210f-4f84-8ee4-0a0a539c9062 is in state SUCCESS 2025-05-05 01:02:35.821839 | orchestrator | 2025-05-05 01:02:35 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:35.823067 | orchestrator | 2025-05-05 01:02:35 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:35.823885 | orchestrator | 2025-05-05 01:02:35 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:38.865752 | orchestrator | 2025-05-05 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:38.865816 | orchestrator | 2025-05-05 01:02:38 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:38.869168 | orchestrator | 2025-05-05 01:02:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:38.872974 | orchestrator | 2025-05-05 01:02:38 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:38.876742 | orchestrator | 2025-05-05 01:02:38 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state 
STARTED 2025-05-05 01:02:38.878169 | orchestrator | 2025-05-05 01:02:38 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:38.878253 | orchestrator | 2025-05-05 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:41.925726 | orchestrator | 2025-05-05 01:02:41 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:41.928444 | orchestrator | 2025-05-05 01:02:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:41.930308 | orchestrator | 2025-05-05 01:02:41 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:41.931664 | orchestrator | 2025-05-05 01:02:41 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:41.932795 | orchestrator | 2025-05-05 01:02:41 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:44.977553 | orchestrator | 2025-05-05 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:44.977717 | orchestrator | 2025-05-05 01:02:44 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:44.977913 | orchestrator | 2025-05-05 01:02:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:44.977938 | orchestrator | 2025-05-05 01:02:44 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:44.977959 | orchestrator | 2025-05-05 01:02:44 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:44.978782 | orchestrator | 2025-05-05 01:02:44 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:48.021995 | orchestrator | 2025-05-05 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:48.022284 | orchestrator | 2025-05-05 01:02:48 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 
01:02:48.024672 | orchestrator | 2025-05-05 01:02:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:48.024719 | orchestrator | 2025-05-05 01:02:48 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:48.025508 | orchestrator | 2025-05-05 01:02:48 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:48.025542 | orchestrator | 2025-05-05 01:02:48 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:51.069235 | orchestrator | 2025-05-05 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:51.069444 | orchestrator | 2025-05-05 01:02:51 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:51.070932 | orchestrator | 2025-05-05 01:02:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:51.072815 | orchestrator | 2025-05-05 01:02:51 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:51.076132 | orchestrator | 2025-05-05 01:02:51 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:51.076882 | orchestrator | 2025-05-05 01:02:51 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:54.127070 | orchestrator | 2025-05-05 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:54.127275 | orchestrator | 2025-05-05 01:02:54 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:54.127838 | orchestrator | 2025-05-05 01:02:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:54.127872 | orchestrator | 2025-05-05 01:02:54 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:54.130248 | orchestrator | 2025-05-05 01:02:54 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 
01:02:57.158312 | orchestrator | 2025-05-05 01:02:54 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:02:57.158466 | orchestrator | 2025-05-05 01:02:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:02:57.158502 | orchestrator | 2025-05-05 01:02:57 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:02:57.159089 | orchestrator | 2025-05-05 01:02:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:02:57.159145 | orchestrator | 2025-05-05 01:02:57 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:02:57.159569 | orchestrator | 2025-05-05 01:02:57 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:02:57.160377 | orchestrator | 2025-05-05 01:02:57 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:00.198865 | orchestrator | 2025-05-05 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:00.198987 | orchestrator | 2025-05-05 01:03:00 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:00.203774 | orchestrator | 2025-05-05 01:03:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:00.204355 | orchestrator | 2025-05-05 01:03:00 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:00.205085 | orchestrator | 2025-05-05 01:03:00 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:00.205765 | orchestrator | 2025-05-05 01:03:00 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:03.234259 | orchestrator | 2025-05-05 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:03.234431 | orchestrator | 2025-05-05 01:03:03 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:03.234854 | orchestrator 
| 2025-05-05 01:03:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:03.234893 | orchestrator | 2025-05-05 01:03:03 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:03.235404 | orchestrator | 2025-05-05 01:03:03 | INFO  | Task 3a1a08e6-63ec-4703-96f8-2feca881b9d2 is in state STARTED 2025-05-05 01:03:03.235965 | orchestrator | 2025-05-05 01:03:03 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:03.236591 | orchestrator | 2025-05-05 01:03:03 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:06.277615 | orchestrator | 2025-05-05 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:06.277742 | orchestrator | 2025-05-05 01:03:06 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:06.278092 | orchestrator | 2025-05-05 01:03:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:06.278509 | orchestrator | 2025-05-05 01:03:06 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:06.278854 | orchestrator | 2025-05-05 01:03:06 | INFO  | Task 3a1a08e6-63ec-4703-96f8-2feca881b9d2 is in state STARTED 2025-05-05 01:03:06.279392 | orchestrator | 2025-05-05 01:03:06 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:06.279947 | orchestrator | 2025-05-05 01:03:06 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:09.316012 | orchestrator | 2025-05-05 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:09.316125 | orchestrator | 2025-05-05 01:03:09 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:09.317540 | orchestrator | 2025-05-05 01:03:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:09.317610 | orchestrator | 
2025-05-05 01:03:09 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:09.318768 | orchestrator | 2025-05-05 01:03:09 | INFO  | Task 3a1a08e6-63ec-4703-96f8-2feca881b9d2 is in state STARTED 2025-05-05 01:03:09.322269 | orchestrator | 2025-05-05 01:03:09 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:09.323241 | orchestrator | 2025-05-05 01:03:09 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:12.362941 | orchestrator | 2025-05-05 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:12.363064 | orchestrator | 2025-05-05 01:03:12 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:12.363525 | orchestrator | 2025-05-05 01:03:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:12.364213 | orchestrator | 2025-05-05 01:03:12 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:12.364573 | orchestrator | 2025-05-05 01:03:12 | INFO  | Task 3a1a08e6-63ec-4703-96f8-2feca881b9d2 is in state SUCCESS 2025-05-05 01:03:12.365714 | orchestrator | 2025-05-05 01:03:12 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:12.367035 | orchestrator | 2025-05-05 01:03:12 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:12.367341 | orchestrator | 2025-05-05 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:15.405222 | orchestrator | 2025-05-05 01:03:15 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:15.406765 | orchestrator | 2025-05-05 01:03:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:15.408694 | orchestrator | 2025-05-05 01:03:15 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:15.409248 | orchestrator | 
2025-05-05 01:03:15 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:15.414814 | orchestrator | 2025-05-05 01:03:15 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:18.460333 | orchestrator | 2025-05-05 01:03:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:18.460583 | orchestrator | 2025-05-05 01:03:18 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:18.461169 | orchestrator | 2025-05-05 01:03:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:18.461199 | orchestrator | 2025-05-05 01:03:18 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:18.461223 | orchestrator | 2025-05-05 01:03:18 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:18.461652 | orchestrator | 2025-05-05 01:03:18 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:21.503531 | orchestrator | 2025-05-05 01:03:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:21.503637 | orchestrator | 2025-05-05 01:03:21 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:21.504728 | orchestrator | 2025-05-05 01:03:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:21.506446 | orchestrator | 2025-05-05 01:03:21 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:21.506985 | orchestrator | 2025-05-05 01:03:21 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:21.507815 | orchestrator | 2025-05-05 01:03:21 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:24.547988 | orchestrator | 2025-05-05 01:03:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:24.548108 | orchestrator | 2025-05-05 01:03:24 | INFO  | 
Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:24.549900 | orchestrator | 2025-05-05 01:03:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:24.551820 | orchestrator | 2025-05-05 01:03:24 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:24.552815 | orchestrator | 2025-05-05 01:03:24 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:24.554587 | orchestrator | 2025-05-05 01:03:24 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:27.606354 | orchestrator | 2025-05-05 01:03:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:27.606529 | orchestrator | 2025-05-05 01:03:27 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:27.607559 | orchestrator | 2025-05-05 01:03:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:27.608841 | orchestrator | 2025-05-05 01:03:27 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:27.610353 | orchestrator | 2025-05-05 01:03:27 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:27.612474 | orchestrator | 2025-05-05 01:03:27 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:27.612735 | orchestrator | 2025-05-05 01:03:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:30.662575 | orchestrator | 2025-05-05 01:03:30 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:30.664850 | orchestrator | 2025-05-05 01:03:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:30.666411 | orchestrator | 2025-05-05 01:03:30 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:30.668825 | orchestrator | 2025-05-05 01:03:30 | INFO  | Task 
327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:30.669651 | orchestrator | 2025-05-05 01:03:30 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:33.725057 | orchestrator | 2025-05-05 01:03:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:33.725196 | orchestrator | 2025-05-05 01:03:33 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:33.726671 | orchestrator | 2025-05-05 01:03:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:33.727537 | orchestrator | 2025-05-05 01:03:33 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:33.728847 | orchestrator | 2025-05-05 01:03:33 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:33.731547 | orchestrator | 2025-05-05 01:03:33 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:36.787843 | orchestrator | 2025-05-05 01:03:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:36.787993 | orchestrator | 2025-05-05 01:03:36 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED 2025-05-05 01:03:36.789675 | orchestrator | 2025-05-05 01:03:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:03:36.790930 | orchestrator | 2025-05-05 01:03:36 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED 2025-05-05 01:03:36.792489 | orchestrator | 2025-05-05 01:03:36 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:03:36.794060 | orchestrator | 2025-05-05 01:03:36 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:03:39.851199 | orchestrator | 2025-05-05 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:03:39.851342 | orchestrator | 2025-05-05 01:03:39 | INFO  | Task 
fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:03:39.853400 | orchestrator | 2025-05-05 01:03:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:39.855674 | orchestrator | 2025-05-05 01:03:39 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:39.857551 | orchestrator | 2025-05-05 01:03:39 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:39.860011 | orchestrator | 2025-05-05 01:03:39 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:03:42.923190 | orchestrator | 2025-05-05 01:03:39 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:03:42.923332 | orchestrator | 2025-05-05 01:03:42 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:03:42.926834 | orchestrator | 2025-05-05 01:03:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:42.929920 | orchestrator | 2025-05-05 01:03:42 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:42.931846 | orchestrator | 2025-05-05 01:03:42 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:42.933986 | orchestrator | 2025-05-05 01:03:42 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:03:42.934457 | orchestrator | 2025-05-05 01:03:42 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:03:45.974144 | orchestrator | 2025-05-05 01:03:45 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:03:45.975541 | orchestrator | 2025-05-05 01:03:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:45.976150 | orchestrator | 2025-05-05 01:03:45 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:45.977226 | orchestrator | 2025-05-05 01:03:45 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:45.978446 | orchestrator | 2025-05-05 01:03:45 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:03:45.980063 | orchestrator | 2025-05-05 01:03:45 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:03:49.009575 | orchestrator | 2025-05-05 01:03:49 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state STARTED
2025-05-05 01:03:49.009831 | orchestrator | 2025-05-05 01:03:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:49.010626 | orchestrator | 2025-05-05 01:03:49 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:49.013569 | orchestrator | 2025-05-05 01:03:49 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:49.014280 | orchestrator | 2025-05-05 01:03:49 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:03:52.064351 | orchestrator | 2025-05-05 01:03:49 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:03:52.064810 | orchestrator |
2025-05-05 01:03:52.064836 | orchestrator |
2025-05-05 01:03:52.064852 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 01:03:52.064867 | orchestrator |
2025-05-05 01:03:52.064894 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:03:52.064909 | orchestrator | Monday 05 May 2025 01:02:31 +0000 (0:00:00.212) 0:00:00.212 ************
2025-05-05 01:03:52.064924 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:03:52.064939 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:03:52.064953 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:03:52.064967 | orchestrator |
2025-05-05 01:03:52.064981 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:03:52.064995 | orchestrator | Monday 05 May 2025 01:02:31 +0000 (0:00:00.435) 0:00:00.647 ************
2025-05-05 01:03:52.065009 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-05 01:03:52.065023 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-05 01:03:52.065037 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-05 01:03:52.065051 | orchestrator |
2025-05-05 01:03:52.065065 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-05 01:03:52.065079 | orchestrator |
2025-05-05 01:03:52.065093 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-05 01:03:52.065107 | orchestrator | Monday 05 May 2025 01:02:32 +0000 (0:00:00.728) 0:00:01.375 ************
2025-05-05 01:03:52.065121 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:03:52.065135 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:03:52.065150 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:03:52.065165 | orchestrator |
2025-05-05 01:03:52.065178 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:03:52.065193 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 01:03:52.065208 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 01:03:52.065223 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 01:03:52.065236 | orchestrator |
2025-05-05 01:03:52.065250 | orchestrator |
2025-05-05 01:03:52.065264 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:03:52.065279 | orchestrator | Monday 05 May 2025 01:02:33 +0000 (0:00:01.018) 0:00:02.394 ************
2025-05-05 01:03:52.065293 | orchestrator | ===============================================================================
2025-05-05 01:03:52.065307 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.02s
2025-05-05 01:03:52.065323 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2025-05-05 01:03:52.065339 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2025-05-05 01:03:52.065355 | orchestrator |
2025-05-05 01:03:52.065587 | orchestrator | None
2025-05-05 01:03:52.065604 | orchestrator |
2025-05-05 01:03:52.065620 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 01:03:52.065636 | orchestrator |
2025-05-05 01:03:52.065652 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:03:52.065668 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.336) 0:00:00.336 ************
2025-05-05 01:03:52.065682 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:03:52.065697 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:03:52.065711 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:03:52.065724 | orchestrator | ok: [testbed-node-3]
2025-05-05 01:03:52.065738 | orchestrator | ok: [testbed-node-4]
2025-05-05 01:03:52.065752 | orchestrator | ok: [testbed-node-5]
2025-05-05 01:03:52.065765 | orchestrator |
2025-05-05 01:03:52.065779 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:03:52.065814 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.707) 0:00:01.044 ************
2025-05-05 01:03:52.065829 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-05 01:03:52.065843 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-05 01:03:52.065858 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-05 01:03:52.065871 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-05 01:03:52.065886 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-05 01:03:52.065900 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-05 01:03:52.065914 | orchestrator |
2025-05-05 01:03:52.065928 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-05 01:03:52.065942 | orchestrator |
2025-05-05 01:03:52.065956 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-05 01:03:52.065971 | orchestrator | Monday 05 May 2025 00:59:17 +0000 (0:00:00.710) 0:00:01.754 ************
2025-05-05 01:03:52.065985 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-05 01:03:52.066000 | orchestrator |
2025-05-05 01:03:52.066059 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-05 01:03:52.066078 | orchestrator | Monday 05 May 2025 00:59:18 +0000 (0:00:00.970) 0:00:02.725 ************
2025-05-05 01:03:52.066093 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:03:52.066107 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:03:52.066121 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:03:52.066135 | orchestrator | ok: [testbed-node-3]
2025-05-05 01:03:52.066149 | orchestrator | ok: [testbed-node-4]
2025-05-05 01:03:52.066162 | orchestrator | ok: [testbed-node-5]
2025-05-05 01:03:52.066176 | orchestrator |
2025-05-05 01:03:52.066191 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-05 01:03:52.066205 | orchestrator | Monday 05 May 2025 00:59:19 +0000 (0:00:01.152) 0:00:03.877 ************
2025-05-05 01:03:52.066219 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:03:52.066233 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:03:52.066247 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:03:52.066261 | orchestrator | ok: [testbed-node-3]
2025-05-05 01:03:52.066275 | orchestrator | ok: [testbed-node-4]
2025-05-05 01:03:52.066299 | orchestrator | ok: [testbed-node-5]
2025-05-05 01:03:52.066314 | orchestrator |
2025-05-05 01:03:52.066329 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-05 01:03:52.066343 | orchestrator | Monday 05 May 2025 00:59:20 +0000 (0:00:01.088) 0:00:04.966 ************
2025-05-05 01:03:52.066357 | orchestrator | ok: [testbed-node-0] => {
2025-05-05 01:03:52.066408 | orchestrator |  "changed": false,
2025-05-05 01:03:52.066423 | orchestrator |  "msg": "All assertions passed"
2025-05-05 01:03:52.066437 | orchestrator | }
2025-05-05 01:03:52.066451 | orchestrator | ok: [testbed-node-1] => {
2025-05-05 01:03:52.066465 | orchestrator |  "changed": false,
2025-05-05 01:03:52.066479 | orchestrator |  "msg": "All assertions passed"
2025-05-05 01:03:52.066493 | orchestrator | }
2025-05-05 01:03:52.066507 | orchestrator | ok: [testbed-node-2] => {
2025-05-05 01:03:52.066521 | orchestrator |  "changed": false,
2025-05-05 01:03:52.066535 | orchestrator |  "msg": "All assertions passed"
2025-05-05 01:03:52.066548 | orchestrator | }
2025-05-05 01:03:52.066562 | orchestrator | ok: [testbed-node-3] => {
2025-05-05 01:03:52.066576 | orchestrator |  "changed": false,
2025-05-05 01:03:52.066590 | orchestrator |  "msg": "All assertions passed"
2025-05-05 01:03:52.066604 | orchestrator | }
2025-05-05 01:03:52.066618 | orchestrator | ok: [testbed-node-4] => {
2025-05-05 01:03:52.066632 | orchestrator |  "changed": false,
2025-05-05 01:03:52.066646 | orchestrator |  "msg": "All assertions passed"
2025-05-05 01:03:52.066660 | orchestrator | }
2025-05-05 01:03:52.066674 | orchestrator | ok: [testbed-node-5] => {
2025-05-05 01:03:52.066696 | orchestrator |  "changed": false,
2025-05-05 01:03:52.066710 | orchestrator |  "msg": "All assertions passed"
2025-05-05 01:03:52.066724 | orchestrator | }
2025-05-05 01:03:52.066738 | orchestrator |
2025-05-05 01:03:52.066752 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-05 01:03:52.066767 | orchestrator | Monday 05 May 2025 00:59:21 +0000 (0:00:00.564) 0:00:05.530 ************
2025-05-05 01:03:52.066781 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.066794 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.066808 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.066822 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.066836 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.066849 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.066863 | orchestrator |
2025-05-05 01:03:52.066877 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-05 01:03:52.066891 | orchestrator | Monday 05 May 2025 00:59:22 +0000 (0:00:00.660) 0:00:06.191 ************
2025-05-05 01:03:52.066905 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-05-05 01:03:52.066926 | orchestrator |
2025-05-05 01:03:52.066940 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-05-05 01:03:52.066954 | orchestrator | Monday 05 May 2025 00:59:25 +0000 (0:00:03.129) 0:00:09.320 ************
2025-05-05 01:03:52.066969 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-05-05 01:03:52.066983 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-05-05 01:03:52.066998 | orchestrator |
2025-05-05 01:03:52.067012 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-05-05 01:03:52.067026 | orchestrator | Monday 05 May 2025 00:59:31 +0000 (0:00:06.340) 0:00:15.661 ************
2025-05-05 01:03:52.067039 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-05 01:03:52.067054 | orchestrator |
2025-05-05 01:03:52.067068 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-05-05 01:03:52.067082 | orchestrator | Monday 05 May 2025 00:59:34 +0000 (0:00:03.354) 0:00:19.015 ************
2025-05-05 01:03:52.067096 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-05 01:03:52.067110 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-05-05 01:03:52.067124 | orchestrator |
2025-05-05 01:03:52.067138 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-05-05 01:03:52.067152 | orchestrator | Monday 05 May 2025 00:59:38 +0000 (0:00:03.889) 0:00:22.905 ************
2025-05-05 01:03:52.067166 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-05 01:03:52.067180 | orchestrator |
2025-05-05 01:03:52.067194 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-05-05 01:03:52.067208 | orchestrator | Monday 05 May 2025 00:59:42 +0000 (0:00:03.388) 0:00:26.293 ************
2025-05-05 01:03:52.067222 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-05-05 01:03:52.067236 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-05-05 01:03:52.067250 | orchestrator |
2025-05-05 01:03:52.067264 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-05 01:03:52.067278 | orchestrator | Monday 05 May 2025 00:59:50 +0000 (0:00:08.613) 0:00:34.907 ************
2025-05-05 01:03:52.067292 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.067306 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.067320 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.067334 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.067347 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.067408 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.067426 | orchestrator |
2025-05-05 01:03:52.067440 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-05-05 01:03:52.067454 | orchestrator | Monday 05 May 2025 00:59:51 +0000 (0:00:00.745) 0:00:35.652 ************
2025-05-05 01:03:52.067476 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.067491 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.067505 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.067519 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.067533 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.067547 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.067574 | orchestrator |
2025-05-05 01:03:52.067589 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-05-05 01:03:52.067613 | orchestrator | Monday 05 May 2025 00:59:55 +0000 (0:00:03.834) 0:00:39.487 ************
2025-05-05 01:03:52.067626 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:03:52.067639 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:03:52.067652 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:03:52.067664 | orchestrator | ok: [testbed-node-3]
2025-05-05 01:03:52.067677 | orchestrator | ok: [testbed-node-4]
2025-05-05 01:03:52.067697 | orchestrator | ok: [testbed-node-5]
2025-05-05 01:03:52.067710 | orchestrator |
2025-05-05 01:03:52.067723 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-05 01:03:52.067735 | orchestrator | Monday 05 May 2025 00:59:56 +0000 (0:00:01.100) 0:00:40.587 ************
2025-05-05 01:03:52.067748 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.067760 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.067773 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.067785 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.067798 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.067810 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.067823 | orchestrator |
2025-05-05 01:03:52.067835 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-05-05 01:03:52.067848 | orchestrator | Monday 05 May 2025 00:59:59 +0000 (0:00:02.586) 0:00:43.174 ************
2025-05-05 01:03:52.067863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-05 01:03:52.067880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.067895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.067942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.067964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-05 01:03:52.067978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.067992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.068007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.068021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.068061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.068088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.068111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-05 01:03:52.068126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.068175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.068203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-05 01:03:52.068267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.068295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.068319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.068357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.068405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.068418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.068441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.068462 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.068476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval'2025-05-05 01:03:52 | INFO  | Task fb62c5b0-0969-4cbc-85b2-d72e1b6139f7 is in state SUCCESS 2025-05-05 01:03:52.068513 | 
orchestrator | : '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.068540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.068604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.068631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.068654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.068687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 
'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.068761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.068787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 
01:03:52.068800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.068833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.068874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.068887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.068920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.068934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.068976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.068989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069022 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.069043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.069081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.069094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.069136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.069210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.069224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.069256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 
01:03:52.069269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.069282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.069296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.069735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.069773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069813 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.069841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.069854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.069874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.069917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.069945 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.069963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.069992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.070006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.070047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.070342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.070419 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.070457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.070481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.070495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.070508 | orchestrator | 2025-05-05 01:03:52.070870 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-05 01:03:52.070887 | orchestrator | Monday 05 May 2025 01:00:01 +0000 (0:00:02.774) 0:00:45.948 ************ 2025-05-05 01:03:52.070898 | orchestrator | [WARNING]: Skipped 2025-05-05 01:03:52.070908 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-05 01:03:52.070919 | orchestrator | due to this access issue: 2025-05-05 01:03:52.070930 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-05 01:03:52.070940 | orchestrator | a directory 2025-05-05 01:03:52.070951 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 01:03:52.070961 | orchestrator | 2025-05-05 01:03:52.070972 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2025-05-05 01:03:52.070982 | orchestrator | Monday 05 May 2025 01:00:02 +0000 (0:00:00.569) 0:00:46.517 ************ 2025-05-05 01:03:52.070992 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:03:52.071003 | orchestrator | 2025-05-05 01:03:52.071013 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-05 01:03:52.071052 | orchestrator | Monday 05 May 2025 01:00:03 +0000 (0:00:01.309) 0:00:47.826 ************ 2025-05-05 01:03:52.071065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.071103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.071132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.071145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.071198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.071500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.071520 | orchestrator | 2025-05-05 01:03:52.071532 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal 
TLS certificate] *** 2025-05-05 01:03:52.071543 | orchestrator | Monday 05 May 2025 01:00:07 +0000 (0:00:04.117) 0:00:51.944 ************ 2025-05-05 01:03:52.071586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.071600 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.071612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.071623 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.071639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.071650 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.071661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.071677 | 
orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.071689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.071700 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.071739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.071752 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.071763 | orchestrator | 2025-05-05 01:03:52.071774 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-05 01:03:52.071789 | orchestrator | Monday 05 May 2025 
01:00:11 +0000 (0:00:03.246) 0:00:55.190 ************ 2025-05-05 01:03:52.071800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.071812 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.071823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-05 01:03:52.071839 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.071871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.071884 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.072123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.072147 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.072510 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.072530 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.072541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.072552 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.072592 | orchestrator | 2025-05-05 01:03:52.072604 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-05 01:03:52.072614 | orchestrator | Monday 05 May 2025 01:00:14 +0000 (0:00:03.751) 0:00:58.942 ************ 2025-05-05 01:03:52.072625 | 
orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.072635 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.072695 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.072706 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.072716 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.072782 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.072795 | orchestrator | 2025-05-05 01:03:52.072806 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-05 01:03:52.072817 | orchestrator | Monday 05 May 2025 01:00:19 +0000 (0:00:05.159) 0:01:04.101 ************ 2025-05-05 01:03:52.072828 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.072862 | orchestrator | 2025-05-05 01:03:52.073086 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-05 01:03:52.073101 | orchestrator | Monday 05 May 2025 01:00:20 +0000 (0:00:00.240) 0:01:04.341 ************ 2025-05-05 01:03:52.073112 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.073123 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.073134 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.073146 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.073157 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.073168 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.073178 | orchestrator | 2025-05-05 01:03:52.073189 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-05 01:03:52.073200 | orchestrator | Monday 05 May 2025 01:00:21 +0000 (0:00:00.811) 0:01:05.153 ************ 2025-05-05 01:03:52.073212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.073280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.073311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.073324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.073343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.073355 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.073410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.073860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.073895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.073907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.073926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.073936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.073945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.074063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.074073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074082 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.074092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.074145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-05-05 01:03:52.074168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.074202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.074351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.074373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.074481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.074490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.074831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.074849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.074947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.074957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.074967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.074976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.075036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075054 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.075064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.075073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075082 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.075091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.075100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.075222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.075240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.075286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.075709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.075719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075754 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.075816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.075839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.075848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.075857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.075979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.075994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.076016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.076026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.076067 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.076120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.076142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.076152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.076184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.076236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076519 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.076530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.076540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076549 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.076648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.076668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.076677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.076903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.076928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.076937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.076947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.076974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.077149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077203 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.077215 | orchestrator | 2025-05-05 01:03:52.077223 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-05 01:03:52.077232 | orchestrator | Monday 05 May 2025 01:00:25 +0000 (0:00:04.465) 0:01:09.619 ************ 2025-05-05 01:03:52.077241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.077249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.077339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.077357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.077378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.077486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.077583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.077601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.077650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.077671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.077693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.077701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.077771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.077781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.077809 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.077819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.078140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.078157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.078176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.078244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.078268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.078287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.078372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.078469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.078877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.078897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.078921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.078930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.078950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.079018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.079042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.079051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.079121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.079133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.079157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.079247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.079610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.079629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.079639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.079702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-05-05 01:03:52.080730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.080746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.080824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.080847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.080872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.080887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.080934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.080956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.080964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.080979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.080987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.081046 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.081076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.081083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.081143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.081158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.081226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.081237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081244 | orchestrator | 2025-05-05 01:03:52.081252 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-05 01:03:52.081272 | orchestrator | Monday 05 May 2025 01:00:30 +0000 (0:00:05.143) 0:01:14.763 ************ 2025-05-05 01:03:52.081279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.081296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.081381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.081481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.081521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.081602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.081692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.081774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.081869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.081923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.081974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.081985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.081994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.082010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082059 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.082104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.082149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.082259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.082328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.082356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.082421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.082466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.082533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.082601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.082618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.082652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.082755 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.082824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082852 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.082908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.082919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082932 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-05 01:03:52.082940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.082947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.082954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.082962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.083025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083040 | orchestrator | 2025-05-05 01:03:52.083048 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-05 01:03:52.083055 | orchestrator | Monday 05 May 2025 01:00:37 +0000 (0:00:07.247) 0:01:22.010 ************ 2025-05-05 01:03:52.083062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.083078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.083159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.083263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.083344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083370 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.083379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.083387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.083468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.083484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.083661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.083669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.083770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083843 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083855 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.083865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.083880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.083887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.083955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.083967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.083975 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.083983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.083991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.084064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.084081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.084201 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.084268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.084304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.084422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.084430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.084454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.084500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.084554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.084562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.084617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.084694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.084719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.084744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.084797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-05 01:03:52.084807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.084831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.084838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084844 | orchestrator | 2025-05-05 01:03:52.084851 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-05 01:03:52.084857 | orchestrator | Monday 05 May 2025 01:00:41 +0000 (0:00:03.176) 0:01:25.186 ************ 2025-05-05 01:03:52.084863 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.084870 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:03:52.084876 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.084883 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:03:52.084889 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.084895 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:03:52.084901 | orchestrator | 2025-05-05 01:03:52.084907 | orchestrator | TASK [neutron : Copying over 
ml2_conf.ini] ************************************* 2025-05-05 01:03:52.084914 | orchestrator | Monday 05 May 2025 01:00:45 +0000 (0:00:04.815) 0:01:30.002 ************ 2025-05-05 01:03:52.084953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.084963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084973 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.084980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.084999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.085058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.085156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.085215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085259 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.085298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085336 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085343 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.085350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.085356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.085432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085454 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.085460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-05-05 01:03:52.085499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.085539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085546 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.085637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.085701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085715 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.085721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.085764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.085797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.085897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.085904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.085970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.085978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.085985 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.085993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.086266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.086290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.086314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.086486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.086507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.086517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.086620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.086631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.086651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.086758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.086778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.086792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.086872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.086893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.086903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.086918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.086968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.086980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.086990 | orchestrator |
2025-05-05 01:03:52.087000 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-05-05 01:03:52.087010 | orchestrator | Monday 05 May 2025 01:00:49 +0000 (0:00:03.692) 0:01:33.694 ************
2025-05-05 01:03:52.087019 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087029 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087038 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.087047 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087057 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.087066 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.087075 | orchestrator |
2025-05-05 01:03:52.087085 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-05-05 01:03:52.087094 | orchestrator | Monday 05 May 2025 01:00:52 +0000 (0:00:03.422) 0:01:37.117 ************
2025-05-05 01:03:52.087104 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087113 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087122 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087131 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.087144 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.087153 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.087170 | orchestrator |
2025-05-05 01:03:52.087179 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-05-05 01:03:52.087188 | orchestrator | Monday 05 May 2025 01:00:54 +0000 (0:00:01.965) 0:01:39.083 ************
2025-05-05 01:03:52.087198 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087207 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087216 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087225 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.087235 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.087244 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.087253 | orchestrator |
2025-05-05 01:03:52.087262 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-05 01:03:52.087271 | orchestrator | Monday 05 May 2025 01:00:57 +0000 (0:00:02.385) 0:01:41.468 ************
2025-05-05 01:03:52.087280 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087289 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087298 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087307 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.087316 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.087326 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.087335 | orchestrator |
2025-05-05 01:03:52.087344 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-05 01:03:52.087353 | orchestrator | Monday 05 May 2025 01:00:59 +0000 (0:00:02.563) 0:01:44.032 ************
2025-05-05 01:03:52.087380 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087390 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087399 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087409 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.087418 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.087427 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.087437 | orchestrator |
2025-05-05 01:03:52.087446 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-05-05 01:03:52.087456 | orchestrator | Monday 05 May 2025 01:01:01 +0000 (0:00:01.740) 0:01:45.772 ************
2025-05-05 01:03:52.087465 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087474 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087483 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.087493 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087502 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.087511 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.087519 | orchestrator |
2025-05-05 01:03:52.087528 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-05-05 01:03:52.087541 | orchestrator | Monday 05 May 2025 01:01:03 +0000 (0:00:01.724) 0:01:47.497 ************
2025-05-05 01:03:52.087550 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-05 01:03:52.087559 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.087568 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-05 01:03:52.087577 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.087586 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-05 01:03:52.087598 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.087607 | orchestrator |
skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-05 01:03:52.087618 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.087628 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-05 01:03:52.087637 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.087647 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-05 01:03:52.087725 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.087738 | orchestrator | 2025-05-05 01:03:52.087747 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-05 01:03:52.087766 | orchestrator | Monday 05 May 2025 01:01:05 +0000 (0:00:02.585) 0:01:50.082 ************ 2025-05-05 01:03:52.087788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.087798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.087807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.087817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.087875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.087902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.087913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.087922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.087932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.087942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.087952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.088012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.088023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.088032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.088048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.088058 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.088066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.088080 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.088132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.088151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.088161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.088170 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-05 01:03:52.088232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.088287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.088349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.088404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-05 01:03:52.088412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.088476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088502 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.088510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-05 01:03:52.088530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.088614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.088635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.088703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.088712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088725 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.088734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-05 01:03:52.088783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-05 01:03:52.088824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.088910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.088929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.088937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.088992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.089004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.089013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.089025 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.089033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-05 01:03:52.089049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.089093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.089104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.089113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-05 01:03:52.089161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.089172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.089181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.089243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.089256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/',
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.089265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.089287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.089393 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.089404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089421 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.089429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.089445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089507 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.089523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-05 01:03:52.089579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.089587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.089601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.089652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.089660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089669 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.089674 | orchestrator | 2025-05-05 01:03:52.089679 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] 
********************************* 2025-05-05 01:03:52.089685 | orchestrator | Monday 05 May 2025 01:01:07 +0000 (0:00:02.023) 0:01:52.106 ************ 2025-05-05 01:03:52.089690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.089695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089705 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.089753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-05 01:03:52.089776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.089815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.089828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.089877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.089885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089893 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.089898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.089903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-05 01:03:52.089951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.089961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 
01:03:52.089972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.089982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.089988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.090049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.090055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.090107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.090171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.090184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.090247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2025-05-05 01:03:52.090322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.090327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090332 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.090337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 
01:03:52.090412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090440 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.090446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.090489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.090512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090525 | 
orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.090555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.090563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.090616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.090689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 
01:03:52.090707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.090712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090725 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.090731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.090747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.090777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.090819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.090833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.090848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.090855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.090865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.090873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.090878 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.090883 | orchestrator |
2025-05-05 01:03:52.090888 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-05-05 01:03:52.090893 | orchestrator | Monday 05 May 2025 01:01:10 +0000 (0:00:02.266) 0:01:54.373 ************
2025-05-05 01:03:52.090898 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.090903 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.090908 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.090913 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.090918 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.090923 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.090928 | orchestrator |
2025-05-05 01:03:52.090933 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-05-05 01:03:52.090938 | orchestrator | Monday 05 May 2025 01:01:13 +0000 (0:00:02.779) 0:01:57.152 ************
2025-05-05 01:03:52.090943 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.090948 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.090956 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.090962 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:03:52.090967 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:03:52.090972 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:03:52.090976 | orchestrator |
2025-05-05 01:03:52.090981 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-05-05 01:03:52.090986 | orchestrator | Monday 05 May 2025 01:01:18 +0000 (0:00:05.274) 0:02:02.427 ************
2025-05-05 01:03:52.090991 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.090996 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091001 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091006 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091011 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091016 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091021 | orchestrator |
2025-05-05 01:03:52.091026 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-05 01:03:52.091030 | orchestrator | Monday 05 May 2025 01:01:21 +0000 (0:00:02.872) 0:02:05.299 ************
2025-05-05 01:03:52.091035 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091040 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091045 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091060 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091066 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091071 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091076 | orchestrator |
2025-05-05 01:03:52.091081 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-05 01:03:52.091086 | orchestrator | Monday 05 May 2025 01:01:23 +0000 (0:00:02.126) 0:02:07.425 ************
2025-05-05 01:03:52.091091 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091095 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091100 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091105 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091110 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091115 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091120 | orchestrator |
2025-05-05 01:03:52.091125 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-05 01:03:52.091134 | orchestrator | Monday 05 May 2025 01:01:25 +0000 (0:00:02.037) 0:02:09.463 ************
2025-05-05 01:03:52.091139 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091144 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091149 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091154 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091159 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091164 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091169 | orchestrator |
2025-05-05 01:03:52.091174 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-05 01:03:52.091179 | orchestrator | Monday 05 May 2025 01:01:28 +0000 (0:00:02.999) 0:02:12.462 ************
2025-05-05 01:03:52.091183 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091189 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091198 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091203 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091208 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091213 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091218 | orchestrator |
2025-05-05 01:03:52.091223 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-05 01:03:52.091228 | orchestrator | Monday 05 May 2025 01:01:30 +0000 (0:00:02.321) 0:02:14.784 ************
2025-05-05 01:03:52.091233 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091237 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091242 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091247 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091252 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091257 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091261 | orchestrator |
2025-05-05 01:03:52.091267 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-05 01:03:52.091272 | orchestrator | Monday 05 May 2025 01:01:33 +0000 (0:00:03.179) 0:02:17.964 ************
2025-05-05 01:03:52.091276 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091281 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091286 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091291 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091296 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091301 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091305 | orchestrator |
2025-05-05 01:03:52.091310 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-05 01:03:52.091315 | orchestrator | Monday 05 May 2025 01:01:35 +0000 (0:00:01.952) 0:02:19.916 ************
2025-05-05 01:03:52.091320 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091325 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091330 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091335 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091343 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091349 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091355 | orchestrator |
2025-05-05 01:03:52.091387 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-05 01:03:52.091394 | orchestrator | Monday 05 May 2025 01:01:37 +0000 (0:00:01.906) 0:02:21.823 ************
2025-05-05 01:03:52.091400 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-05 01:03:52.091407 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.091412 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-05 01:03:52.091418 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.091424 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-05 01:03:52.091429 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.091435 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-05 01:03:52.091444 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.091449 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-05 01:03:52.091455 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.091461 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-05 01:03:52.091466 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.091472 | orchestrator |
2025-05-05 01:03:52.091477 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-05-05 01:03:52.091483 | orchestrator | Monday 05 May 2025 01:01:40 +0000 (0:00:02.729) 0:02:24.553 ************
2025-05-05 01:03:52.091501 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.091508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.091535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.091570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.091589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.091665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091672 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.091684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.091702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.091724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.091729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': 
False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091742 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:03:52.091840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.091847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.091876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.091886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091894 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:03:52.091902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.091928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.091959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.091990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.091995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.092014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.092041 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092053 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:03:52.092059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.092064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092085 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-05 01:03:52.092129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.092147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.092174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092187 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:03:52.092192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.092198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent 
' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092278 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.092283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.092309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092323 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:03:52.092329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.092334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.092443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.092459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092483 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:03:52.092488 | orchestrator | 2025-05-05 01:03:52.092493 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-05 01:03:52.092498 | orchestrator | Monday 05 May 2025 01:01:42 +0000 (0:00:02.078) 0:02:26.631 ************ 2025-05-05 01:03:52.092503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.092509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092548 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.092563 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092606 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.092639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.092703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-05 01:03:52.092708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.092739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-05 01:03:52.092769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-05-05 01:03:52.092774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.092817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:03:52.092890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-05 01:03:52.092910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-05 01:03:52.092929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-05-05 01:03:52.092956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:03:52.092974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-05 01:03:52.092985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-05 01:03:52.092993 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.092999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.093004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.093017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.093056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.093069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.093075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.093080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.093099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.093107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:03:52.093151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:03:52.093156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-05 01:03:52.093169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-05 01:03:52.093177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-05 01:03:52.093182 | orchestrator |
2025-05-05 01:03:52.093188 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-05 01:03:52.093193 | orchestrator | Monday 05 May 2025 01:01:45 +0000 (0:00:03.052) 0:02:29.684 ************
2025-05-05 01:03:52.093198 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:03:52.093203 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:03:52.093208 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:03:52.093214 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:03:52.093219 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:03:52.093224 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:03:52.093229 | orchestrator |
2025-05-05 01:03:52.093234 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-05-05 01:03:52.093240 | orchestrator | Monday 05 May 2025 01:01:46 +0000 (0:00:00.910) 0:02:30.594 ************
2025-05-05 01:03:52.093245 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:03:52.093250 | orchestrator |
2025-05-05 01:03:52.093255 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-05-05 01:03:52.093260 | orchestrator | Monday 05 May 2025 01:01:48 +0000 (0:00:02.273) 0:02:32.868 ************
2025-05-05 01:03:52.093265 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:03:52.093270 | orchestrator |
2025-05-05 01:03:52.093275 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-05-05 01:03:52.093281 | orchestrator | Monday 05 May 2025 01:01:50 +0000 (0:00:02.093) 0:02:34.962 ************
2025-05-05 01:03:52.093286 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:03:52.093291 | orchestrator |
2025-05-05 01:03:52.093296 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-05 01:03:52.093301 | orchestrator | Monday 05 May 2025 01:02:29 +0000 (0:00:39.053) 0:03:14.015 ************
2025-05-05 01:03:52.093306 | orchestrator |
2025-05-05 01:03:52.093311 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-05 01:03:52.093316 | orchestrator | Monday 05 May 2025 01:02:29 +0000 (0:00:00.060) 0:03:14.076 ************
2025-05-05 01:03:52.093322 | orchestrator |
2025-05-05 01:03:52.093327 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-05 01:03:52.093332 | orchestrator | Monday 05 May 2025 01:02:30 +0000 (0:00:00.321) 0:03:14.397 ************
2025-05-05 01:03:52.093337 | orchestrator |
2025-05-05 01:03:52.093342 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-05 01:03:52.093347 | orchestrator | Monday 05 May 2025 01:02:30 +0000 (0:00:00.058) 0:03:14.456 ************
2025-05-05 01:03:52.093353 | orchestrator |
2025-05-05 01:03:52.093367 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-05 01:03:52.093373 | orchestrator | Monday 05 May 2025 01:02:30 +0000 (0:00:00.055) 0:03:14.511 ************
2025-05-05 01:03:52.093378 | orchestrator |
2025-05-05 01:03:52.093383 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-05 01:03:52.093388 | orchestrator | Monday 05 May 2025 01:02:30 +0000 (0:00:00.057) 0:03:14.569 ************
2025-05-05 01:03:52.093396 | orchestrator |
2025-05-05 01:03:52.093401 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-05 01:03:52.093407 | orchestrator | Monday 05 May 2025 01:02:30 +0000 (0:00:00.358) 0:03:14.927 ************
2025-05-05 01:03:52.093412 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:03:52.093417 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:03:52.093422 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:03:52.093427 | orchestrator |
2025-05-05 01:03:52.093432 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-05 01:03:52.093440 | orchestrator | Monday 05 May 2025 01:02:57 +0000 (0:00:26.682) 0:03:41.610 ************
2025-05-05 01:03:55.120800 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:03:55.120950 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:03:55.120979 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:03:55.121003 | orchestrator |
2025-05-05 01:03:55.121028 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:03:55.121053 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-05 01:03:55.121077 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-05 01:03:55.121100 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-05 01:03:55.121122 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-05 01:03:55.121144 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-05 01:03:55.121165 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-05 01:03:55.121187 | orchestrator |
2025-05-05 01:03:55.121209 | orchestrator |
2025-05-05 01:03:55.121230 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:03:55.121252 | orchestrator | Monday 05 May 2025 01:03:50 +0000 (0:00:53.456) 0:04:35.066 ************
2025-05-05 01:03:55.121273 | orchestrator | ===============================================================================
2025-05-05 01:03:55.121295 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.46s
2025-05-05 01:03:55.121317 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.05s
2025-05-05 01:03:55.121339 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.68s
2025-05-05 01:03:55.121389 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.61s
2025-05-05 01:03:55.121413 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.25s
2025-05-05 01:03:55.121458 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.34s
2025-05-05 01:03:55.121481 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.27s
2025-05-05 01:03:55.121503 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 5.16s
2025-05-05 01:03:55.121525 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.14s
2025-05-05 01:03:55.121547 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.82s
2025-05-05 01:03:55.121569 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.47s
2025-05-05 01:03:55.121590 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.12s
2025-05-05 01:03:55.121612 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.89s
2025-05-05 01:03:55.121634 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.83s
2025-05-05 01:03:55.121688 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.75s
2025-05-05 01:03:55.121711 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.69s
2025-05-05 01:03:55.121740 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.42s
2025-05-05 01:03:55.121762 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.39s
2025-05-05 01:03:55.121784 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.35s
2025-05-05 01:03:55.121806 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.25s
2025-05-05 01:03:55.121830 | orchestrator | 2025-05-05 01:03:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:55.121852 | orchestrator | 2025-05-05 01:03:52 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:55.121874 | orchestrator | 2025-05-05 01:03:52 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:55.121897 | orchestrator | 2025-05-05 01:03:52 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:03:55.121920 | orchestrator | 2025-05-05 01:03:52 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:03:55.121964 | orchestrator | 2025-05-05 01:03:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:55.124210 | orchestrator | 2025-05-05 01:03:55 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:03:55.124769 | orchestrator | 2025-05-05 01:03:55 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:55.126219 | orchestrator | 2025-05-05 01:03:55 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:55.127790 | orchestrator | 2025-05-05 01:03:55 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:03:55.128028 | orchestrator | 2025-05-05 01:03:55 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:03:58.186892 | orchestrator | 2025-05-05 01:03:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:03:58.187156 | orchestrator | 2025-05-05 01:03:58 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:03:58.187279 | orchestrator | 2025-05-05 01:03:58 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:03:58.187707 | orchestrator | 2025-05-05 01:03:58 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:03:58.188198 | orchestrator | 2025-05-05 01:03:58 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:01.219817 | orchestrator | 2025-05-05 01:03:58 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:01.219979 | orchestrator | 2025-05-05 01:04:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:01.221035 | orchestrator | 2025-05-05 01:04:01 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:01.221080 | orchestrator | 2025-05-05 01:04:01 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:04:01.221567 | orchestrator | 2025-05-05 01:04:01 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:01.222159 | orchestrator | 2025-05-05 01:04:01 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:04.252683 | orchestrator | 2025-05-05 01:04:01 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:04.252802 | orchestrator | 2025-05-05 01:04:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:04.255668 | orchestrator | 2025-05-05 01:04:04 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:04.256634 | orchestrator | 2025-05-05 01:04:04 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state STARTED
2025-05-05 01:04:04.257588 | orchestrator | 2025-05-05 01:04:04 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:04.258496 | orchestrator | 2025-05-05 01:04:04 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:04.258775 | orchestrator | 2025-05-05 01:04:04 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:07.298843 | orchestrator | 2025-05-05 01:04:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:07.302963 | orchestrator |
2025-05-05 01:04:07.303465 | orchestrator |
2025-05-05 01:04:07.303494 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 01:04:07.303510 | orchestrator |
2025-05-05 01:04:07.303525 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:04:07.303539 | orchestrator | Monday 05 May 2025 01:02:13 +0000 (0:00:00.356) 0:00:00.357 ************
2025-05-05 01:04:07.303553 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:04:07.303568 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:04:07.303582 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:04:07.303596 | orchestrator |
2025-05-05 01:04:07.303610 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:04:07.303624 | orchestrator | Monday 05 May 2025 01:02:14 +0000 (0:00:00.482) 0:00:00.839 ************
2025-05-05 01:04:07.303638 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-05 01:04:07.303653 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-05 01:04:07.303667 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-05 01:04:07.303681 | orchestrator |
2025-05-05 01:04:07.303710 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-05 01:04:07.303735 | orchestrator |
2025-05-05 01:04:07.303750 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-05 01:04:07.303764 | orchestrator | Monday 05 May 2025 01:02:14 +0000 (0:00:00.361) 0:00:01.200 ************
2025-05-05 01:04:07.303778 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 01:04:07.303793 | orchestrator |
2025-05-05 01:04:07.303808 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-05 01:04:07.303822 | orchestrator | Monday 05 May 2025 01:02:15 +0000 (0:00:00.911) 0:00:02.112 ************
2025-05-05 01:04:07.303836 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-05 01:04:07.303850 | orchestrator |
2025-05-05 01:04:07.303864 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-05 01:04:07.303878 | orchestrator | Monday 05 May 2025 01:02:18 +0000 (0:00:03.291) 0:00:05.404 ************
2025-05-05 01:04:07.303891 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-05 01:04:07.303906 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-05 01:04:07.303920 | orchestrator |
2025-05-05 01:04:07.303934 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-05 01:04:07.303958 | orchestrator | Monday 05 May 2025 01:02:24 +0000 (0:00:06.300) 0:00:11.704 ************
2025-05-05 01:04:07.303972 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-05 01:04:07.303987 | orchestrator |
2025-05-05 01:04:07.304000 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-05 01:04:07.304015 | orchestrator | Monday 05 May 2025 01:02:28 +0000 (0:00:03.262) 0:00:14.966 ************
2025-05-05 01:04:07.304029 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-05 01:04:07.304063 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-05 01:04:07.304086 | orchestrator |
2025-05-05 01:04:07.304103 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-05 01:04:07.304120 | orchestrator | Monday 05 May 2025 01:02:32 +0000 (0:00:03.834) 0:00:18.801 ************
2025-05-05 01:04:07.304136 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-05 01:04:07.304153 | orchestrator |
2025-05-05 01:04:07.304169 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-05 01:04:07.304185 | orchestrator | Monday 05 May 2025 01:02:35 +0000 (0:00:03.258) 0:00:22.059 ************
2025-05-05 01:04:07.304201 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-05 01:04:07.304217 | orchestrator |
2025-05-05 01:04:07.304233 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-05 01:04:07.304249 | orchestrator | Monday 05 May 2025 01:02:39 +0000 (0:00:04.060) 0:00:26.120 ************
2025-05-05 01:04:07.304265 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.304281 | orchestrator |
2025-05-05 01:04:07.304298 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-05 01:04:07.304314 | orchestrator | Monday 05 May 2025 01:02:43 +0000 (0:00:03.673) 0:00:29.794 ************
2025-05-05 01:04:07.304330 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.304347 | orchestrator |
2025-05-05 01:04:07.304415 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-05-05 01:04:07.304443 | orchestrator | Monday 05 May 2025 01:02:47 +0000 (0:00:04.327) 0:00:34.122 ************
2025-05-05 01:04:07.304459 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.304474 | orchestrator |
2025-05-05 01:04:07.304488 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-05-05 01:04:07.304502 | orchestrator | Monday 05 May 2025 01:02:50 +0000 (0:00:03.592) 0:00:37.714 ************
2025-05-05 01:04:07.304531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.304550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.304565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT':
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.304589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.304604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:04:07.304633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:04:07.304649 | orchestrator |
2025-05-05 01:04:07.304664 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-05-05 01:04:07.304678 | orchestrator | Monday 05 May 2025 01:02:53 +0000 (0:00:02.160) 0:00:39.874 ************
2025-05-05 01:04:07.304693 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:04:07.304708 | orchestrator |
2025-05-05 01:04:07.304722 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-05-05 01:04:07.304736 | orchestrator | Monday 05 May 2025 01:02:53 +0000 (0:00:00.114) 0:00:39.989 ************
2025-05-05 01:04:07.304750 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:04:07.304764 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:04:07.304778 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:04:07.304792 | orchestrator |
2025-05-05 01:04:07.304807 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-05-05 01:04:07.304821 | orchestrator | Monday 05 May 2025 01:02:53 +0000 (0:00:00.356) 0:00:40.346 ************
2025-05-05 01:04:07.304841 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-05 01:04:07.304855 | orchestrator |
2025-05-05 01:04:07.304869 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-05-05 01:04:07.304884 | orchestrator | Monday 05 May 2025 01:02:53 +0000 (0:00:00.422) 0:00:40.769 ************
2025-05-05 01:04:07.304898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.304913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.304929 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:04:07.304943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.304965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.304981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.305009 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:04:07.305023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.305038 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:04:07.305052 | orchestrator | 
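The `item` dicts dumped above all share one shape: a service key (`magnum-api`, `magnum-conductor`) mapping to a container definition with `container_name`, `enabled`, `volumes`, and a Docker-style `healthcheck`. The following is an illustrative sketch only (not kolla-ansible code) showing how a definition of that shape yields the shell command the container healthcheck runs; the dict values are copied from the log.

```python
# Hypothetical sketch: derive healthcheck commands from service definitions
# shaped like the `item` dicts in the log above. Only the relevant keys are kept.
services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
            "timeout": "30",
        },
    },
    "magnum-conductor": {
        "container_name": "magnum_conductor",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port magnum-conductor 5672"],
            "timeout": "30",
        },
    },
}

def healthcheck_command(service: dict) -> str:
    """Extract the shell command from a CMD-SHELL style healthcheck spec."""
    kind, command = service["healthcheck"]["test"]
    if kind != "CMD-SHELL":
        raise ValueError(f"unexpected healthcheck type: {kind}")
    return command

# Only enabled services get a healthcheck; both are enabled here.
commands = {
    name: healthcheck_command(svc)
    for name, svc in services.items()
    if svc["enabled"]
}
print(commands["magnum-conductor"])
```

Note the split visible in the log: the API container is probed over HTTP on its bind address, while the conductor (which has no HTTP endpoint) is probed by checking its outbound AMQP connection on port 5672.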
2025-05-05 01:04:07.305066 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-05-05 01:04:07.305081 | orchestrator | Monday 05 May 2025 01:02:54 +0000 (0:00:00.312) 0:00:41.446 ************
2025-05-05 01:04:07.305095 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:04:07.305109 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:04:07.305123 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:04:07.305136 | orchestrator |
2025-05-05 01:04:07.305150 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-05 01:04:07.305165 | orchestrator | Monday 05 May 2025 01:02:54 +0000 (0:00:00.677) 0:00:41.758 ************
2025-05-05 01:04:07.305179 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 01:04:07.305193 | orchestrator |
2025-05-05 01:04:07.305207 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-05-05 01:04:07.305221 | orchestrator | Monday 05 May 2025 01:02:56 +0000 (0:00:01.179) 0:00:42.938 ************
2025-05-05 01:04:07.305236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled':
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.305257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.305279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-05-05 01:04:07.305294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.305309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.305324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:04:07.305338 | orchestrator |
2025-05-05 01:04:07.305353 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-05-05 01:04:07.305423 | orchestrator | Monday 05 May 2025 01:02:58 +0000 (0:00:02.685) 0:00:45.624 ************
2025-05-05 01:04:07.305447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.305470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.305485 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:04:07.305500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.305515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.305529 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:04:07.305544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.305571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:04:07.305586 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:04:07.305600 | orchestrator |
2025-05-05 01:04:07.305615 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2025-05-05 01:04:07.305629 | orchestrator | Monday 05 May 2025 01:02:59 +0000 (0:00:01.031) 0:00:46.655 ************
2025-05-05 01:04:07.305644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.305658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.305673 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:04:07.305688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.305714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.305729 | orchestrator | skipping: 
[testbed-node-1]
2025-05-05 01:04:07.305749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.305765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:04:07.305779 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:04:07.305793 | orchestrator |
2025-05-05 01:04:07.305807 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2025-05-05 01:04:07.305822 | orchestrator | Monday 05 May 2025 01:03:01 +0000 (0:00:01.675) 0:00:48.331 ************
2025-05-05 01:04:07.305837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-05 01:04:07.305853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
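Each `magnum-api` item above also carries a `haproxy` sub-dict defining two frontends: an internal one bound to the internal VIP and an external one bound to `api.testbed.osism.xyz`, both on port 9511. As a hedged sketch (the `render_bind` helper is hypothetical, not kolla-ansible's template; the hostnames are taken from the endpoint URLs in the log), the mapping from one such entry to a bind line can be shown as:

```python
# Hypothetical sketch of how the `haproxy` sub-dict seen in the log maps to
# frontend bind lines. `internal_vip` comes from the api-int endpoint above.
internal_vip = "api-int.testbed.osism.xyz"

haproxy = {
    "magnum_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "9511", "listen_port": "9511",
    },
    "magnum_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9511", "listen_port": "9511",
    },
}

def render_bind(name: str, cfg: dict) -> str:
    """Render one frontend bind line: external entries use their FQDN,
    internal entries use the internal VIP."""
    host = cfg["external_fqdn"] if cfg["external"] else internal_vip
    return f"frontend {name}: bind {host}:{cfg['listen_port']} mode {cfg['mode']}"

binds = [render_bind(n, c) for n, c in haproxy.items() if c["enabled"] == "yes"]
for line in binds:
    print(line)
```

This is why the endpoint-creation task earlier in the log registers two Keystone endpoints for the same service: internal traffic terminates on the VIP, public traffic on the external FQDN, with HAProxy listening on the same port for both.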
2025-05-05 01:04:07.305886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.305928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.305943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.305957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.305970 | orchestrator | 2025-05-05 01:04:07.305982 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-05 01:04:07.306001 | orchestrator | Monday 05 May 2025 01:03:04 +0000 (0:00:03.347) 0:00:51.678 ************ 2025-05-05 01:04:07.306073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.306128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.306152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.306166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.306180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.306201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.306214 | orchestrator | 2025-05-05 01:04:07.306227 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-05 01:04:07.306245 | orchestrator | Monday 05 May 2025 01:03:17 +0000 (0:00:13.008) 0:01:04.687 ************ 2025-05-05 01:04:07.306268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.306283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.306296 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:04:07.306309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.306329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.306342 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:04:07.306395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-05 01:04:07.306412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:04:07.306426 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:04:07.306439 | orchestrator | 2025-05-05 01:04:07.306452 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-05 01:04:07.306465 | orchestrator | Monday 05 May 2025 01:03:19 +0000 (0:00:01.302) 0:01:05.989 ************ 2025-05-05 01:04:07.306478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.306492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.306525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.306554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-05 01:04:07.306569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:04:07.306582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:04:07.306595 | orchestrator |
2025-05-05 01:04:07.306608 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-05 01:04:07.306627 | orchestrator | Monday 05 May 2025 01:03:22 +0000 (0:00:03.595) 0:01:09.585 ************
2025-05-05 01:04:07.306640 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:04:07.306653 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:04:07.306665 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:04:07.306678 | orchestrator |
2025-05-05 01:04:07.306690 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-05-05 01:04:07.306703 | orchestrator | Monday 05 May 2025 01:03:23 +0000 (0:00:00.582) 0:01:10.168 ************
2025-05-05 01:04:07.306716 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.306728 | orchestrator |
2025-05-05 01:04:07.306741 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-05-05 01:04:07.306754 | orchestrator | Monday 05 May 2025 01:03:25 +0000 (0:00:02.476) 0:01:12.644 ************
2025-05-05 01:04:07.306766 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.306779 | orchestrator |
2025-05-05 01:04:07.306791 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-05-05 01:04:07.306804 | orchestrator | Monday 05 May 2025 01:03:28 +0000 (0:00:02.338) 0:01:14.982 ************
2025-05-05 01:04:07.306816 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.306829 | orchestrator |
2025-05-05 01:04:07.306841 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-05 01:04:07.306853 | orchestrator | Monday 05 May 2025 01:03:42 +0000 (0:00:14.620) 0:01:29.603 ************
2025-05-05 01:04:07.306865 | orchestrator |
2025-05-05 01:04:07.306878 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-05 01:04:07.306891 | orchestrator | Monday 05 May 2025 01:03:42 +0000 (0:00:00.059) 0:01:29.663 ************
2025-05-05 01:04:07.306903 | orchestrator |
2025-05-05 01:04:07.306915 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-05 01:04:07.306928 | orchestrator | Monday 05 May 2025 01:03:43 +0000 (0:00:00.178) 0:01:29.841 ************
2025-05-05 01:04:07.306940 | orchestrator |
2025-05-05 01:04:07.306952 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-05-05 01:04:07.306965 | orchestrator | Monday 05 May 2025 01:03:43 +0000 (0:00:00.056) 0:01:29.898 ************
2025-05-05 01:04:07.306978 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.306990 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:04:07.307003 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:04:07.307015 | orchestrator |
2025-05-05 01:04:07.307028 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-05-05 01:04:07.307040 | orchestrator | Monday 05 May 2025 01:03:57 +0000 (0:00:14.429) 0:01:44.328 ************
2025-05-05 01:04:07.307053 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:04:07.307065 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:04:07.307078 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:04:07.307091 | orchestrator |
2025-05-05 01:04:07.307103 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:04:07.307121 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-05 01:04:10.351034 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-05 01:04:10.351143 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-05 01:04:10.351175 | orchestrator |
2025-05-05 01:04:10.351205 | orchestrator |
2025-05-05 01:04:10.351241 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:04:10.351268 | orchestrator | Monday 05 May 2025 01:04:07 +0000 (0:00:09.444) 0:01:53.772 ************
2025-05-05 01:04:10.351287 | orchestrator | ===============================================================================
2025-05-05 01:04:10.351301 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.62s
2025-05-05 01:04:10.351340 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.43s
2025-05-05 01:04:10.351355 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 13.01s
2025-05-05 01:04:10.351428 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.44s
2025-05-05 01:04:10.351457 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.30s
2025-05-05 01:04:10.351472 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.33s
2025-05-05 01:04:10.351486 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.06s
2025-05-05 01:04:10.351501 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.83s
2025-05-05 01:04:10.351515 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.67s
2025-05-05 01:04:10.351530 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.60s
2025-05-05 01:04:10.351544 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.59s
2025-05-05 01:04:10.351558 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.35s
2025-05-05 01:04:10.351574 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.29s
2025-05-05 01:04:10.351590 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.26s
2025-05-05 01:04:10.351607 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.26s
2025-05-05 01:04:10.351624 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.69s
2025-05-05 01:04:10.351640 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.48s
2025-05-05 01:04:10.351656 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.34s
2025-05-05 01:04:10.351672 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.16s
2025-05-05 01:04:10.351688 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.68s
2025-05-05 01:04:10.351705 | orchestrator | 2025-05-05 01:04:07 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:10.351722 | orchestrator | 2025-05-05 01:04:07 | INFO  | Task 69e51023-de4e-4b27-a962-bdbedf89e4e2 is in state SUCCESS
2025-05-05 01:04:10.351738 | orchestrator | 2025-05-05 01:04:07 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:10.351755 | orchestrator | 2025-05-05 01:04:07 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:10.351771 | orchestrator | 2025-05-05 01:04:07 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:10.351803 | orchestrator | 2025-05-05 01:04:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:10.354750 | orchestrator | 2025-05-05 01:04:10 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:10.356876 | orchestrator | 2025-05-05 01:04:10 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:10.357093 | orchestrator | 2025-05-05 01:04:10 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:10.357129 | orchestrator | 2025-05-05 01:04:10 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:13.398404 | orchestrator | 2025-05-05 01:04:10 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:13.398535 | orchestrator | 2025-05-05 01:04:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:13.399023 | orchestrator | 2025-05-05 01:04:13 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:13.399060 | orchestrator | 2025-05-05 01:04:13 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:13.399530 | orchestrator | 2025-05-05 01:04:13 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:13.401330 | orchestrator | 2025-05-05 01:04:13 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:13.401959 | orchestrator | 2025-05-05 01:04:13 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:16.449104 | orchestrator | 2025-05-05 01:04:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:16.450701 | orchestrator | 2025-05-05 01:04:16 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:16.451452 | orchestrator | 2025-05-05 01:04:16 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:16.452158 | orchestrator | 2025-05-05 01:04:16 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:16.455449 | orchestrator | 2025-05-05 01:04:16 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:16.456269 | orchestrator | 2025-05-05 01:04:16 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:19.489892 | orchestrator | 2025-05-05 01:04:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:19.490514 | orchestrator | 2025-05-05 01:04:19 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:19.491300 | orchestrator | 2025-05-05 01:04:19 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:19.496314 | orchestrator | 2025-05-05 01:04:19 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:19.498585 | orchestrator | 2025-05-05 01:04:19 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:19.498976 | orchestrator | 2025-05-05 01:04:19 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:22.551603 | orchestrator | 2025-05-05 01:04:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:22.553128 | orchestrator | 2025-05-05 01:04:22 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state STARTED
2025-05-05 01:04:22.553173 | orchestrator | 2025-05-05 01:04:22 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:22.553819 | orchestrator | 2025-05-05 01:04:22 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:22.554792 | orchestrator | 2025-05-05 01:04:22 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:25.589079 | orchestrator | 2025-05-05 01:04:22 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:25.589205 | orchestrator | 2025-05-05 01:04:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:25.589756 | orchestrator | 2025-05-05 01:04:25 | INFO  | Task bff3436e-9b75-450a-9f69-9b7d389fe71c is in state SUCCESS
2025-05-05 01:04:25.589800 | orchestrator | 2025-05-05 01:04:25 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:25.591412 | orchestrator | 2025-05-05 01:04:25 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:25.591601 | orchestrator | 2025-05-05 01:04:25 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:25.592147 | orchestrator | 2025-05-05 01:04:25 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:28.636107 | orchestrator | 2025-05-05 01:04:25 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:28.636419 | orchestrator | 2025-05-05 01:04:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:28.636459 | orchestrator | 2025-05-05 01:04:28 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:28.637035 | orchestrator | 2025-05-05 01:04:28 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:28.638907 | orchestrator | 2025-05-05 01:04:28 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:28.639303 | orchestrator | 2025-05-05 01:04:28 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:31.682254 | orchestrator | 2025-05-05 01:04:28 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:31.682988 | orchestrator | 2025-05-05 01:04:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:31.683984 | orchestrator | 2025-05-05 01:04:31 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:31.686058 | orchestrator | 2025-05-05 01:04:31 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:31.687273 | orchestrator | 2025-05-05 01:04:31 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:31.690649 | orchestrator | 2025-05-05 01:04:31 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:34.739854 | orchestrator | 2025-05-05 01:04:31 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:34.740002 | orchestrator | 2025-05-05 01:04:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:34.741304 | orchestrator | 2025-05-05 01:04:34 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:34.742880 | orchestrator | 2025-05-05 01:04:34 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:34.744930 | orchestrator | 2025-05-05 01:04:34 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:34.746433 | orchestrator | 2025-05-05 01:04:34 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:34.746903 | orchestrator | 2025-05-05 01:04:34 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:37.784083 | orchestrator | 2025-05-05 01:04:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:37.784976 | orchestrator | 2025-05-05 01:04:37 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:37.785023 | orchestrator | 2025-05-05 01:04:37 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:37.785704 | orchestrator | 2025-05-05 01:04:37 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:37.786517 | orchestrator | 2025-05-05 01:04:37 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:37.786921 | orchestrator | 2025-05-05 01:04:37 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:40.838006 | orchestrator | 2025-05-05 01:04:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:40.838273 | orchestrator | 2025-05-05 01:04:40 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:40.838931 | orchestrator | 2025-05-05 01:04:40 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:40.839647 | orchestrator | 2025-05-05 01:04:40 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:40.840209 | orchestrator | 2025-05-05 01:04:40 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:40.841116 | orchestrator | 2025-05-05 01:04:40 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:43.889319 | orchestrator | 2025-05-05 01:04:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:43.889826 | orchestrator | 2025-05-05 01:04:43 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:43.890716 | orchestrator | 2025-05-05 01:04:43 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:43.892717 | orchestrator | 2025-05-05 01:04:43 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:43.893318 | orchestrator | 2025-05-05 01:04:43 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:04:46.925883 | orchestrator | 2025-05-05 01:04:43 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:04:46.926007 | orchestrator | 2025-05-05 01:04:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:04:46.927797 | orchestrator | 2025-05-05 01:04:46 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:04:46.929761 | orchestrator | 2025-05-05 01:04:46 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED
2025-05-05 01:04:46.931595 | orchestrator | 2025-05-05 01:04:46 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:04:46.933750 | orchestrator | 2025-05-05 01:04:46 | INFO  | Task 
1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:04:49.972069 | orchestrator | 2025-05-05 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:04:49.972195 | orchestrator | 2025-05-05 01:04:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:04:49.973431 | orchestrator | 2025-05-05 01:04:49 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:04:49.974928 | orchestrator | 2025-05-05 01:04:49 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:04:49.976690 | orchestrator | 2025-05-05 01:04:49 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:04:49.978161 | orchestrator | 2025-05-05 01:04:49 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:04:49.978675 | orchestrator | 2025-05-05 01:04:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:04:53.013549 | orchestrator | 2025-05-05 01:04:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:04:53.013947 | orchestrator | 2025-05-05 01:04:53 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:04:53.015582 | orchestrator | 2025-05-05 01:04:53 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:04:53.016111 | orchestrator | 2025-05-05 01:04:53 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:04:53.016747 | orchestrator | 2025-05-05 01:04:53 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:04:56.066690 | orchestrator | 2025-05-05 01:04:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:04:56.066833 | orchestrator | 2025-05-05 01:04:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:04:56.067448 | orchestrator | 2025-05-05 01:04:56 | INFO  | Task 
562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:04:56.069744 | orchestrator | 2025-05-05 01:04:56 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:04:56.074120 | orchestrator | 2025-05-05 01:04:56 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:04:56.076846 | orchestrator | 2025-05-05 01:04:56 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:04:56.077458 | orchestrator | 2025-05-05 01:04:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:04:59.130335 | orchestrator | 2025-05-05 01:04:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:04:59.131248 | orchestrator | 2025-05-05 01:04:59 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:04:59.132008 | orchestrator | 2025-05-05 01:04:59 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:04:59.133235 | orchestrator | 2025-05-05 01:04:59 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:04:59.133594 | orchestrator | 2025-05-05 01:04:59 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:02.168643 | orchestrator | 2025-05-05 01:04:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:02.168784 | orchestrator | 2025-05-05 01:05:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:02.169257 | orchestrator | 2025-05-05 01:05:02 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:02.169293 | orchestrator | 2025-05-05 01:05:02 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:02.169809 | orchestrator | 2025-05-05 01:05:02 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:02.170772 | orchestrator | 2025-05-05 01:05:02 | INFO  | Task 
1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:05.197230 | orchestrator | 2025-05-05 01:05:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:05.197304 | orchestrator | 2025-05-05 01:05:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:05.197642 | orchestrator | 2025-05-05 01:05:05 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:05.197656 | orchestrator | 2025-05-05 01:05:05 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:05.197669 | orchestrator | 2025-05-05 01:05:05 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:05.198202 | orchestrator | 2025-05-05 01:05:05 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:05.198223 | orchestrator | 2025-05-05 01:05:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:08.221589 | orchestrator | 2025-05-05 01:05:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:08.222128 | orchestrator | 2025-05-05 01:05:08 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:08.222584 | orchestrator | 2025-05-05 01:05:08 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:08.223411 | orchestrator | 2025-05-05 01:05:08 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:08.224488 | orchestrator | 2025-05-05 01:05:08 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:11.257561 | orchestrator | 2025-05-05 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:11.257689 | orchestrator | 2025-05-05 01:05:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:11.258661 | orchestrator | 2025-05-05 01:05:11 | INFO  | Task 
562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:11.259931 | orchestrator | 2025-05-05 01:05:11 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:11.260160 | orchestrator | 2025-05-05 01:05:11 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:11.261236 | orchestrator | 2025-05-05 01:05:11 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:14.322879 | orchestrator | 2025-05-05 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:14.322999 | orchestrator | 2025-05-05 01:05:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:14.324771 | orchestrator | 2025-05-05 01:05:14 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:14.325165 | orchestrator | 2025-05-05 01:05:14 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:14.325845 | orchestrator | 2025-05-05 01:05:14 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:14.326389 | orchestrator | 2025-05-05 01:05:14 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:14.326974 | orchestrator | 2025-05-05 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:17.359686 | orchestrator | 2025-05-05 01:05:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:17.359984 | orchestrator | 2025-05-05 01:05:17 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:17.360624 | orchestrator | 2025-05-05 01:05:17 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:17.361513 | orchestrator | 2025-05-05 01:05:17 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:17.364244 | orchestrator | 2025-05-05 01:05:17 | INFO  | Task 
1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:17.364376 | orchestrator | 2025-05-05 01:05:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:20.392806 | orchestrator | 2025-05-05 01:05:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:20.393256 | orchestrator | 2025-05-05 01:05:20 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:20.394886 | orchestrator | 2025-05-05 01:05:20 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:20.396224 | orchestrator | 2025-05-05 01:05:20 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:20.396863 | orchestrator | 2025-05-05 01:05:20 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:20.397019 | orchestrator | 2025-05-05 01:05:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:23.428610 | orchestrator | 2025-05-05 01:05:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:23.429019 | orchestrator | 2025-05-05 01:05:23 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:23.429489 | orchestrator | 2025-05-05 01:05:23 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:23.430190 | orchestrator | 2025-05-05 01:05:23 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:23.430714 | orchestrator | 2025-05-05 01:05:23 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:23.430861 | orchestrator | 2025-05-05 01:05:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:26.485702 | orchestrator | 2025-05-05 01:05:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:26.485869 | orchestrator | 2025-05-05 01:05:26 | INFO  | Task 
562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:26.486608 | orchestrator | 2025-05-05 01:05:26 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:26.487244 | orchestrator | 2025-05-05 01:05:26 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:26.487713 | orchestrator | 2025-05-05 01:05:26 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:26.487806 | orchestrator | 2025-05-05 01:05:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:29.522784 | orchestrator | 2025-05-05 01:05:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:29.523450 | orchestrator | 2025-05-05 01:05:29 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:05:29.524126 | orchestrator | 2025-05-05 01:05:29 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state STARTED 2025-05-05 01:05:29.524674 | orchestrator | 2025-05-05 01:05:29 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:05:29.525450 | orchestrator | 2025-05-05 01:05:29 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:05:29.525533 | orchestrator | 2025-05-05 01:05:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:05:32.548967 | orchestrator | 2025-05-05 01:05:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:05:32.551707 | orchestrator | 2025-05-05 01:05:32.551800 | orchestrator | 2025-05-05 01:05:32.551833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:05:32.551879 | orchestrator | 2025-05-05 01:05:32.551908 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 01:05:32.551935 | orchestrator | Monday 05 May 2025 01:03:53 +0000 (0:00:00.252) 0:00:00.252 
************ 2025-05-05 01:05:32.551963 | orchestrator | ok: [testbed-manager] 2025-05-05 01:05:32.551990 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:05:32.552016 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:05:32.552043 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:05:32.552069 | orchestrator | ok: [testbed-node-3] 2025-05-05 01:05:32.552095 | orchestrator | ok: [testbed-node-4] 2025-05-05 01:05:32.552122 | orchestrator | ok: [testbed-node-5] 2025-05-05 01:05:32.552166 | orchestrator | 2025-05-05 01:05:32.552400 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:05:32.552427 | orchestrator | Monday 05 May 2025 01:03:54 +0000 (0:00:00.773) 0:00:01.026 ************ 2025-05-05 01:05:32.552453 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552478 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552502 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552526 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552583 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552619 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552644 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-05 01:05:32.552666 | orchestrator | 2025-05-05 01:05:32.552688 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-05 01:05:32.552709 | orchestrator | 2025-05-05 01:05:32.552730 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-05 01:05:32.552776 | orchestrator | Monday 05 May 2025 01:03:55 +0000 (0:00:00.924) 0:00:01.951 ************ 2025-05-05 01:05:32.552800 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:05:32.552822 | orchestrator | 2025-05-05 01:05:32.552844 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-05 01:05:32.552866 | orchestrator | Monday 05 May 2025 01:03:57 +0000 (0:00:01.414) 0:00:03.365 ************ 2025-05-05 01:05:32.552888 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-05-05 01:05:32.552910 | orchestrator | 2025-05-05 01:05:32.552931 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-05 01:05:32.552952 | orchestrator | Monday 05 May 2025 01:04:00 +0000 (0:00:03.164) 0:00:06.529 ************ 2025-05-05 01:05:32.552977 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-05 01:05:32.553001 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-05 01:05:32.553023 | orchestrator | 2025-05-05 01:05:32.553046 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-05 01:05:32.553068 | orchestrator | Monday 05 May 2025 01:04:05 +0000 (0:00:05.596) 0:00:12.126 ************ 2025-05-05 01:05:32.553089 | orchestrator | ok: [testbed-manager] => (item=service) 2025-05-05 01:05:32.553110 | orchestrator | 2025-05-05 01:05:32.553132 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-05 01:05:32.553154 | orchestrator | Monday 05 May 2025 01:04:08 +0000 (0:00:02.887) 0:00:15.013 ************ 2025-05-05 01:05:32.553175 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:05:32.553195 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-05-05 01:05:32.553215 | 
orchestrator | 2025-05-05 01:05:32.553245 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-05 01:05:32.553267 | orchestrator | Monday 05 May 2025 01:04:11 +0000 (0:00:03.271) 0:00:18.285 ************ 2025-05-05 01:05:32.553282 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-05-05 01:05:32.553294 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-05-05 01:05:32.553307 | orchestrator | 2025-05-05 01:05:32.553319 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-05 01:05:32.553332 | orchestrator | Monday 05 May 2025 01:04:17 +0000 (0:00:05.562) 0:00:23.847 ************ 2025-05-05 01:05:32.553390 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-05-05 01:05:32.553406 | orchestrator | 2025-05-05 01:05:32.553419 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:05:32.553432 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:32.553457 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:32.553481 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:32.553503 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:32.553527 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:32.553569 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:35.576087 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:05:35.576237 | orchestrator | 2025-05-05 01:05:35.576257 | orchestrator | 
2025-05-05 01:05:35.576273 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:05:35.576289 | orchestrator | Monday 05 May 2025 01:04:22 +0000 (0:00:04.533) 0:00:28.380 ************
2025-05-05 01:05:35.576303 | orchestrator | ===============================================================================
2025-05-05 01:05:35.576317 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.60s
2025-05-05 01:05:35.576331 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.56s
2025-05-05 01:05:35.576400 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.53s
2025-05-05 01:05:35.576417 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.27s
2025-05-05 01:05:35.576431 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.16s
2025-05-05 01:05:35.576446 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.89s
2025-05-05 01:05:35.576460 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.41s
2025-05-05 01:05:35.576475 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s
2025-05-05 01:05:35.576489 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s
2025-05-05 01:05:35.576503 | orchestrator |
2025-05-05 01:05:35.576518 | orchestrator | 2025-05-05 01:05:32 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:35.576532 | orchestrator | 2025-05-05 01:05:32 | INFO  | Task 327edd83-b2d0-4f1a-87b8-85baf2c63484 is in state SUCCESS
2025-05-05 01:05:35.576546 | orchestrator | 2025-05-05 01:05:32 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:35.576561 | orchestrator | 2025-05-05 01:05:32 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:35.576575 | orchestrator | 2025-05-05 01:05:32 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:35.576589 | orchestrator | 2025-05-05 01:05:32 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:35.576620 | orchestrator | 2025-05-05 01:05:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:35.577493 | orchestrator | 2025-05-05 01:05:35 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:35.577529 | orchestrator | 2025-05-05 01:05:35 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:35.580725 | orchestrator | 2025-05-05 01:05:35 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:35.582480 | orchestrator | 2025-05-05 01:05:35 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:38.611249 | orchestrator | 2025-05-05 01:05:35 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:38.611414 | orchestrator | 2025-05-05 01:05:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:38.611712 | orchestrator | 2025-05-05 01:05:38 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:38.611748 | orchestrator | 2025-05-05 01:05:38 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:38.612292 | orchestrator | 2025-05-05 01:05:38 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:38.613111 | orchestrator | 2025-05-05 01:05:38 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:41.640586 | orchestrator | 2025-05-05 01:05:38 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:41.640711 | orchestrator | 2025-05-05 01:05:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:41.643142 | orchestrator | 2025-05-05 01:05:41 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:41.644156 | orchestrator | 2025-05-05 01:05:41 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:41.644201 | orchestrator | 2025-05-05 01:05:41 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:41.644224 | orchestrator | 2025-05-05 01:05:41 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:44.693808 | orchestrator | 2025-05-05 01:05:41 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:44.693927 | orchestrator | 2025-05-05 01:05:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:44.694203 | orchestrator | 2025-05-05 01:05:44 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:44.694787 | orchestrator | 2025-05-05 01:05:44 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:44.695454 | orchestrator | 2025-05-05 01:05:44 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:44.695957 | orchestrator | 2025-05-05 01:05:44 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:44.696101 | orchestrator | 2025-05-05 01:05:44 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:47.727794 | orchestrator | 2025-05-05 01:05:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:47.728610 | orchestrator | 2025-05-05 01:05:47 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:47.728725 | orchestrator | 2025-05-05 01:05:47 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:47.728894 | orchestrator | 2025-05-05 01:05:47 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:47.729632 | orchestrator | 2025-05-05 01:05:47 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:47.729772 | orchestrator | 2025-05-05 01:05:47 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:50.752833 | orchestrator | 2025-05-05 01:05:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:50.753027 | orchestrator | 2025-05-05 01:05:50 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:50.753670 | orchestrator | 2025-05-05 01:05:50 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:50.754133 | orchestrator | 2025-05-05 01:05:50 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:50.754768 | orchestrator | 2025-05-05 01:05:50 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:50.755051 | orchestrator | 2025-05-05 01:05:50 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:53.799568 | orchestrator | 2025-05-05 01:05:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:53.800304 | orchestrator | 2025-05-05 01:05:53 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:53.800386 | orchestrator | 2025-05-05 01:05:53 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:53.800905 | orchestrator | 2025-05-05 01:05:53 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:53.801506 | orchestrator | 2025-05-05 01:05:53 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:56.826728 | orchestrator | 2025-05-05 01:05:53 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:56.826885 | orchestrator | 2025-05-05 01:05:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:56.829460 | orchestrator | 2025-05-05 01:05:56 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:56.830173 | orchestrator | 2025-05-05 01:05:56 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:56.830983 | orchestrator | 2025-05-05 01:05:56 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:56.831613 | orchestrator | 2025-05-05 01:05:56 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:05:56.831775 | orchestrator | 2025-05-05 01:05:56 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:05:59.871888 | orchestrator | 2025-05-05 01:05:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:05:59.871999 | orchestrator | 2025-05-05 01:05:59 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:05:59.872752 | orchestrator | 2025-05-05 01:05:59 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:05:59.873399 | orchestrator | 2025-05-05 01:05:59 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:05:59.874163 | orchestrator | 2025-05-05 01:05:59 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:02.914874 | orchestrator | 2025-05-05 01:05:59 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:02.915040 | orchestrator | 2025-05-05 01:06:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:02.915142 | orchestrator | 2025-05-05 01:06:02 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:02.915476 | orchestrator | 2025-05-05 01:06:02 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:02.916034 | orchestrator | 2025-05-05 01:06:02 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:02.916687 | orchestrator | 2025-05-05 01:06:02 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:02.918897 | orchestrator | 2025-05-05 01:06:02 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:05.962600 | orchestrator | 2025-05-05 01:06:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:09.006060 | orchestrator | 2025-05-05 01:06:05 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:09.006169 | orchestrator | 2025-05-05 01:06:05 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:09.006188 | orchestrator | 2025-05-05 01:06:05 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:09.006203 | orchestrator | 2025-05-05 01:06:05 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:09.006219 | orchestrator | 2025-05-05 01:06:05 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:09.006248 | orchestrator | 2025-05-05 01:06:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:09.006669 | orchestrator | 2025-05-05 01:06:09 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:09.007509 | orchestrator | 2025-05-05 01:06:09 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:09.011549 | orchestrator | 2025-05-05 01:06:09 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:09.012486 | orchestrator | 2025-05-05 01:06:09 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:12.044197 | orchestrator | 2025-05-05 01:06:09 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:12.044363 | orchestrator | 2025-05-05 01:06:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:12.044504 | orchestrator | 2025-05-05 01:06:12 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:12.044908 | orchestrator | 2025-05-05 01:06:12 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:12.045652 | orchestrator | 2025-05-05 01:06:12 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:12.046581 | orchestrator | 2025-05-05 01:06:12 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:15.113651 | orchestrator | 2025-05-05 01:06:12 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:15.113836 | orchestrator | 2025-05-05 01:06:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:15.120730 | orchestrator | 2025-05-05 01:06:15 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:15.121635 | orchestrator | 2025-05-05 01:06:15 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:15.121669 | orchestrator | 2025-05-05 01:06:15 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:15.121691 | orchestrator | 2025-05-05 01:06:15 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:18.164862 | orchestrator | 2025-05-05 01:06:15 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:18.165004 | orchestrator | 2025-05-05 01:06:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:18.169186 | orchestrator | 2025-05-05 01:06:18 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:18.171280 | orchestrator | 2025-05-05 01:06:18 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:18.171379 | orchestrator | 2025-05-05 01:06:18 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:18.172616 | orchestrator | 2025-05-05 01:06:18 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:18.172910 | orchestrator | 2025-05-05 01:06:18 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:21.210952 | orchestrator | 2025-05-05 01:06:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:21.211883 | orchestrator | 2025-05-05 01:06:21 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:21.211930 | orchestrator | 2025-05-05 01:06:21 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:21.212711 | orchestrator | 2025-05-05 01:06:21 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:21.213899 | orchestrator | 2025-05-05 01:06:21 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:24.254118 | orchestrator | 2025-05-05 01:06:21 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:24.254244 | orchestrator | 2025-05-05 01:06:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:24.254580 | orchestrator | 2025-05-05 01:06:24 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:24.255988 | orchestrator | 2025-05-05 01:06:24 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:24.256796 | orchestrator | 2025-05-05 01:06:24 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:24.257762 | orchestrator | 2025-05-05 01:06:24 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:27.322179 | orchestrator | 2025-05-05 01:06:24 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:27.322304 | orchestrator | 2025-05-05 01:06:27 | INFO  | Task
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:27.323867 | orchestrator | 2025-05-05 01:06:27 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:27.325352 | orchestrator | 2025-05-05 01:06:27 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:27.326688 | orchestrator | 2025-05-05 01:06:27 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:27.328237 | orchestrator | 2025-05-05 01:06:27 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:27.328472 | orchestrator | 2025-05-05 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:30.387982 | orchestrator | 2025-05-05 01:06:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:30.390072 | orchestrator | 2025-05-05 01:06:30 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:30.391549 | orchestrator | 2025-05-05 01:06:30 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:30.392349 | orchestrator | 2025-05-05 01:06:30 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:30.392383 | orchestrator | 2025-05-05 01:06:30 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:33.443902 | orchestrator | 2025-05-05 01:06:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:33.444054 | orchestrator | 2025-05-05 01:06:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:33.445669 | orchestrator | 2025-05-05 01:06:33 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:33.445712 | orchestrator | 2025-05-05 01:06:33 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:33.447076 | orchestrator | 2025-05-05 01:06:33 | INFO  | Task 
1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:33.450118 | orchestrator | 2025-05-05 01:06:33 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:36.501029 | orchestrator | 2025-05-05 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:36.501214 | orchestrator | 2025-05-05 01:06:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:36.501770 | orchestrator | 2025-05-05 01:06:36 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:36.502789 | orchestrator | 2025-05-05 01:06:36 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:36.504253 | orchestrator | 2025-05-05 01:06:36 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:36.505673 | orchestrator | 2025-05-05 01:06:36 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:39.549363 | orchestrator | 2025-05-05 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:39.549554 | orchestrator | 2025-05-05 01:06:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:39.550146 | orchestrator | 2025-05-05 01:06:39 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:39.551713 | orchestrator | 2025-05-05 01:06:39 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:39.553137 | orchestrator | 2025-05-05 01:06:39 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:39.555177 | orchestrator | 2025-05-05 01:06:39 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:42.630459 | orchestrator | 2025-05-05 01:06:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:42.630609 | orchestrator | 2025-05-05 01:06:42 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:42.633800 | orchestrator | 2025-05-05 01:06:42 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:42.638627 | orchestrator | 2025-05-05 01:06:42 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:42.641134 | orchestrator | 2025-05-05 01:06:42 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:42.642920 | orchestrator | 2025-05-05 01:06:42 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:42.643243 | orchestrator | 2025-05-05 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:45.693874 | orchestrator | 2025-05-05 01:06:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:45.695056 | orchestrator | 2025-05-05 01:06:45 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:45.696202 | orchestrator | 2025-05-05 01:06:45 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:45.697432 | orchestrator | 2025-05-05 01:06:45 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED 2025-05-05 01:06:45.698995 | orchestrator | 2025-05-05 01:06:45 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:06:48.748073 | orchestrator | 2025-05-05 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:48.748218 | orchestrator | 2025-05-05 01:06:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:48.749418 | orchestrator | 2025-05-05 01:06:48 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:48.753170 | orchestrator | 2025-05-05 01:06:48 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:48.755050 | orchestrator | 2025-05-05 01:06:48 | INFO  | Task 
1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state STARTED
2025-05-05 01:06:48.757907 | orchestrator | 2025-05-05 01:06:48 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:51.803231 | orchestrator | 2025-05-05 01:06:48 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:51.803447 | orchestrator | 2025-05-05 01:06:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:51.805163 | orchestrator | 2025-05-05 01:06:51 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:51.806551 | orchestrator | 2025-05-05 01:06:51 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:51.810628 | orchestrator | 2025-05-05 01:06:51 | INFO  | Task 1ea65f44-84ae-4e77-a3a6-8ec0dcab82b3 is in state SUCCESS
2025-05-05 01:06:51.812450 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-05 01:06:51.812482 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-05 01:06:51.812497 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.154) 0:00:00.154 ************
2025-05-05 01:06:51.812558 | orchestrator | changed: [localhost]
2025-05-05 01:06:51.813233 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-05 01:06:51.813254 | orchestrator | Monday 05 May 2025 00:59:16 +0000 (0:00:00.670) 0:00:00.825 ************
2025-05-05 01:06:51.813361 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
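The wait loop above (poll each task's state, report it, sleep, repeat until every task reaches a terminal state such as SUCCESS) can be sketched as follows. This is a minimal illustration, not the osism implementation; `get_state` is a hypothetical callback that would wrap the real task-result lookup.

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task's state until all tasks reach a terminal state.

    `get_state` is a hypothetical callable mapping a task id to its
    current state string (e.g. "STARTED", "SUCCESS", "FAILURE").
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note that, as in the log, the observed cycle time (~3 s between checks) can exceed the nominal sleep interval (1 s), since each state lookup adds its own latency on top of the sleep.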
[... STILL ALIVE keep-alive markers repeat while the download runs ...]
2025-05-05 01:06:51.814434 | orchestrator | changed: [localhost]
2025-05-05 01:06:51.814467 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-05 01:06:51.814483 | orchestrator | Monday 05 May 2025 01:05:15 +0000 (0:05:58.451) 0:05:59.277 ************
2025-05-05 01:06:51.814499 | orchestrator | changed: [localhost]
2025-05-05 01:06:51.814547 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 01:06:51.814578 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:06:51.814592 | orchestrator | Monday 05 May 2025 01:05:28 +0000 (0:00:13.321) 0:06:12.598 ************
2025-05-05 01:06:51.814607 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:06:51.814621 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:06:51.814635 | orchestrator | ok: [testbed-node-2]
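The STILL ALIVE markers above exist so the CI console is not silent during a long-running task (here, a ~6-minute image download); many CI systems kill jobs that produce no output for too long. A minimal sketch of that pattern, assuming the work runs in the foreground while a background thread emits the heartbeat:

```python
import threading

def run_with_keepalive(task_name, work, interval=60.0, log=print):
    """Run work() while a background thread periodically emits a
    STILL ALIVE marker, so the console shows progress during long tasks."""
    stop = threading.Event()

    def heartbeat():
        # Event.wait(timeout) returns False on timeout (emit a marker)
        # and True once stop is set (the work finished): exit the loop.
        while not stop.wait(interval):
            log(f"STILL ALIVE [task '{task_name}' is running]")

    t = threading.Thread(target=heartbeat, daemon=True)
    t.start()
    try:
        return work()
    finally:
        stop.set()
        t.join()
```

Using `Event.wait` rather than `time.sleep` lets the heartbeat thread stop promptly when the task completes instead of sleeping out its full interval.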
TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:06:51.814766 | orchestrator | Monday 05 May 2025 01:05:28 +0000 (0:00:00.411) 0:06:13.010 ************
2025-05-05 01:06:51.814783 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-05 01:06:51.815634 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-05 01:06:51.815679 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-05 01:06:51.815704 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-05 01:06:51.815736 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-05 01:06:51.815751 | orchestrator | skipping: no hosts matched
2025-05-05 01:06:51.815789 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:06:51.815839 | orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-05 01:06:51.816362 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-05 01:06:51.816649 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-05 01:06:51.816690 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-05 01:06:51.816750 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:06:51.816773 | orchestrator | Monday 05 May 2025 01:05:29 +0000 (0:00:00.496) 0:06:13.507 ************
2025-05-05 01:06:51.816796 | orchestrator | ===============================================================================
2025-05-05
01:06:51.816810 | orchestrator | Download ironic-agent initramfs --------------------------------------- 358.45s
2025-05-05 01:06:51.816823 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.32s
2025-05-05 01:06:51.816835 | orchestrator | Ensure the destination directory exists --------------------------------- 0.67s
2025-05-05 01:06:51.816847 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2025-05-05 01:06:51.816860 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2025-05-05 01:06:51.816897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-05 01:06:51.816922 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:06:51.816934 | orchestrator | Monday 05 May 2025 01:02:36 +0000 (0:00:00.265) 0:00:00.265 ************
2025-05-05 01:06:51.816946 | orchestrator | ok: [testbed-manager]
2025-05-05 01:06:51.816959 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:06:51.816972 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:06:51.816984 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:06:51.816997 | orchestrator | ok: [testbed-node-3]
2025-05-05 01:06:51.817009 | orchestrator | ok: [testbed-node-4]
2025-05-05 01:06:51.817021 | orchestrator | ok: [testbed-node-5]
2025-05-05 01:06:51.817046 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:06:51.817059 | orchestrator | Monday 05 May 2025 01:02:37 +0000 (0:00:00.726) 0:00:00.991 ************
2025-05-05 01:06:51.817083 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-05 01:06:51.817097 | orchestrator | ok:
[testbed-node-0] => (item=enable_prometheus_True) 2025-05-05 01:06:51.817110 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-05 01:06:51.817122 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-05 01:06:51.817135 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-05 01:06:51.817147 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-05 01:06:51.817160 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-05 01:06:51.817172 | orchestrator | 2025-05-05 01:06:51.817185 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-05 01:06:51.817197 | orchestrator | 2025-05-05 01:06:51.817210 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-05 01:06:51.817222 | orchestrator | Monday 05 May 2025 01:02:37 +0000 (0:00:00.639) 0:00:01.631 ************ 2025-05-05 01:06:51.817235 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:06:51.817249 | orchestrator | 2025-05-05 01:06:51.817261 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-05 01:06:51.817273 | orchestrator | Monday 05 May 2025 01:02:39 +0000 (0:00:01.079) 0:00:02.710 ************ 2025-05-05 01:06:51.817288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.817339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.817355 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}}}}) 2025-05-05 01:06:51.817402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.817464 | orchestrator 
| changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.817490 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.817538 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.817565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.817584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.817597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817611 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.817625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.817715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.817762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817796 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-05 01:06:51.817831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.817845 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.817858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.817885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.817906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.817926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.817940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.817975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.817996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.818063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.818080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.818174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 
'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.818188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.818225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.818265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.818278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.818371 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.818401 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.818491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  
2025-05-05 01:06:51.818514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 
'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.818618 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.818641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.818679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.818693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.818730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.818757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.818783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.818810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.818865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.818879 | orchestrator | 2025-05-05 01:06:51.818891 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-05 01:06:51.818901 | orchestrator | Monday 05 May 2025 01:02:42 +0000 (0:00:03.200) 0:00:05.910 ************ 2025-05-05 01:06:51.818912 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:06:51.818923 | orchestrator | 2025-05-05 01:06:51.818933 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-05 01:06:51.818943 | orchestrator | Monday 05 May 2025 01:02:43 +0000 (0:00:01.709) 0:00:07.619 ************ 2025-05-05 01:06:51.818954 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-05 01:06:51.818965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.818977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.818987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.819003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.819018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.819037 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.819049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.819071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819093 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819294 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-05 01:06:51.819333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.819384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.819462 | orchestrator | 2025-05-05 01:06:51.819479 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-05 01:06:51.819496 | orchestrator | Monday 05 May 2025 01:02:49 +0000 (0:00:05.350) 0:00:12.970 ************ 2025-05-05 01:06:51.819515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-05 01:06:51.819533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.819552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.819578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.819590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-05 01:06:51.819611 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.819636 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-05 01:06:51.819701 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.819763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.819794 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.819812 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:06:51.819830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-05 01:06:51.819849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.819893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.819910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})
2025-05-05 01:06:51.819921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.819932 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:06:51.819942 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.819953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.819973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.819984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820026 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.820042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820080 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.820091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820136 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.820147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820185 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.820195 | orchestrator |
2025-05-05 01:06:51.820206 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-05 01:06:51.820216 | orchestrator | Monday 05 May 2025 01:02:51 +0000 (0:00:02.438) 0:00:15.409 ************
2025-05-05 01:06:51.820227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820243 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820253 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.820290 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820492 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:06:51.820503 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.820513 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.820524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.820595 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.820605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820643 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.820654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820696 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.820706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.820763 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.820778 | orchestrator |
2025-05-05 01:06:51.820789 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-05 01:06:51.820800 | orchestrator | Monday 05 May 2025 01:02:54 +0000 (0:00:02.650) 0:00:18.059 ************
2025-05-05 01:06:51.820810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820945 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-05 01:06:51.820963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.820980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.821031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.821051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.821062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.821073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.821138 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821162 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-05 01:06:51.821185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.821210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw',
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.821229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.821275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.821396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.821472 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.821483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.821504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.821538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 
01:06:51.821698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.821716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.821728 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.821789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.821860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.821872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.821914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821932 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-05 01:06:51.821941 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.821951 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.821960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.821970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.821991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.822008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.822058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.822081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.822107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 
01:06:51.822123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.822138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.822148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.822157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.822178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.822188 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.822202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.822212 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.822221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.822230 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.822293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.822345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.822366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.822394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.822421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.822432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-05 01:06:51.822446 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.822456 | orchestrator |
2025-05-05 01:06:51.822467 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-05 01:06:51.822477 | orchestrator | Monday 05 May 2025 01:03:01 +0000 (0:00:07.179) 0:00:25.239 ************
2025-05-05 01:06:51.822487 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-05 01:06:51.822497 | orchestrator |
2025-05-05 01:06:51.822507 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-05 01:06:51.822517 | orchestrator | Monday 05 May 2025 01:03:02 +0000 (0:00:01.074) 0:00:26.313 ************
2025-05-05 01:06:51.822527 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.822538 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822561 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822572 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822582 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 
'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822595 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822606 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822617 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822639 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822650 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822659 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 
01:06:51.822668 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1114156, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-05 01:06:51.822677 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822691 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822700 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822723 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822733 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822742 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 
1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822751 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822760 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822786 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822813 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822822 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822832 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822841 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822854 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822870 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822885 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822894 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1114177, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 
1737057119.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-05 01:06:51.822903 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822912 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822922 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822936 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822950 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822966 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-05 01:06:51.822976 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.822985 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.822994 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823003 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823016 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823036 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823046 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823055 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823064 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823073 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823082 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823096 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823117 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1114164, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2684608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823127 | orchestrator
| skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823136 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823145 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823154 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823163 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823183 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823198 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823208 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823217 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823226 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823235 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.823244 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823253 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823277 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True,
'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823369 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823382 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.823392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823401 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.823410 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823419 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823428 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.823437 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823455 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.823473 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1114172, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823506 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823516 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.823526 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1114198, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823535 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1114180, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2714608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823544 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1114171, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823553 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1114178, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2704608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823562 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1114196, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2734609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823585 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg':
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1114167, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2694607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823613 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1114185, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2724607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-05 01:06:51.823624 | orchestrator |
2025-05-05 01:06:51.823633 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-05 01:06:51.823642 | orchestrator | Monday 05 May 2025 01:03:42 +0000 (0:00:39.488) 0:01:05.802 ************
2025-05-05 01:06:51.823651 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-05 01:06:51.823660 | orchestrator |
2025-05-05 01:06:51.823668 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-05 01:06:51.823677 | orchestrator | Monday 05 May 2025 01:03:42 +0000 (0:00:00.445) 0:01:06.248 ************
2025-05-05 01:06:51.823686 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.823695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823704 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.823713 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823722 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.823731 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-05 01:06:51.823740 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.823748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823757 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.823766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823775 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.823783 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-05 01:06:51.823792 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.823801 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823810 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.823819 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823827 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.823836 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.823845 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823890 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.823900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823909 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.823924 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.823933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823941 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.823950 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823959 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.823967 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.823976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.823985 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.823994 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.824002 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.824011 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.824020 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.824029 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-05 01:06:51.824037 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-05 01:06:51.824046 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-05 01:06:51.824055 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-05 01:06:51.824063 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-05 01:06:51.824072 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-05 01:06:51.824081 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-05 01:06:51.824089 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-05 01:06:51.824098 | orchestrator |
2025-05-05 01:06:51.824107 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-05 01:06:51.824115 | orchestrator | Monday 05 May 2025 01:03:44 +0000 (0:00:01.769) 0:01:08.017 ************
2025-05-05 01:06:51.824124 | orchestrator | skipping: [testbed-node-0]
=> (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824133 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.824142 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824151 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.824160 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824169 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.824206 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824223 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.824239 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824260 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.824278 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824294 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.824343 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-05 01:06:51.824356 | orchestrator |
2025-05-05 01:06:51.824369 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-05 01:06:51.824381 | orchestrator | Monday 05 May 2025 01:04:00 +0000 (0:00:16.113) 0:01:24.130 ************
2025-05-05 01:06:51.824393 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824405 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.824417 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824432 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.824447 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824472 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.824487 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824501 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.824515 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824528 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.824542 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824556 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.824570 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-05 01:06:51.824584 | orchestrator |
2025-05-05 01:06:51.824598 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-05 01:06:51.824612 | orchestrator | Monday 05 May 2025 01:04:04 +0000 (0:00:04.371) 0:01:28.502 ************
2025-05-05 01:06:51.824626 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824640 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.824656 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824670 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.824684 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824699 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.824713 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824727 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.824742 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824757 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.824772 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824787 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.824802 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-05 01:06:51.824817 | orchestrator |
2025-05-05 01:06:51.824840 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-05 01:06:51.824856 | orchestrator | Monday 05 May 2025 01:04:08 +0000 (0:00:03.527) 0:01:32.029 ************
2025-05-05 01:06:51.824870 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-05 01:06:51.824885 | orchestrator |
2025-05-05 01:06:51.824895 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-05 01:06:51.824904 | orchestrator | Monday 05 May 2025 01:04:08 +0000 (0:00:00.447) 0:01:32.476 ************
2025-05-05 01:06:51.824914 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:06:51.824923 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.824933 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.824948 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.824963 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.824979 | orchestrator |
skipping: [testbed-node-4]
2025-05-05 01:06:51.824995 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.825011 | orchestrator |
2025-05-05 01:06:51.825026 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-05 01:06:51.825040 | orchestrator | Monday 05 May 2025 01:04:09 +0000 (0:00:00.587) 0:01:33.063 ************
2025-05-05 01:06:51.825050 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:06:51.825068 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.825078 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.825087 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.825096 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:06:51.825105 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:06:51.825115 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:06:51.825124 | orchestrator |
2025-05-05 01:06:51.825140 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-05 01:06:51.825150 | orchestrator | Monday 05 May 2025 01:04:12 +0000 (0:00:03.582) 0:01:36.646 ************
2025-05-05 01:06:51.825160 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825169 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.825179 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825188 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.825198 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825207 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.825217 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825227 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.825243 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825253 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:06:51.825263 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825273 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.825283 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-05 01:06:51.825293 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.825327 | orchestrator |
2025-05-05 01:06:51.825337 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-05 01:06:51.825347 | orchestrator | Monday 05 May 2025 01:04:15 +0000 (0:00:02.539) 0:01:39.185 ************
2025-05-05 01:06:51.825356 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825366 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.825375 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825385 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.825394 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825404 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.825413 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825423 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.825432 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825442 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.825451 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825460 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:06:51.825470 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-05 01:06:51.825479 | orchestrator |
2025-05-05 01:06:51.825488 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-05-05 01:06:51.825497 | orchestrator | Monday 05 May 2025 01:04:18 +0000 (0:00:03.249) 0:01:42.435 ************
2025-05-05 01:06:51.825507 | orchestrator | [WARNING]: Skipped
2025-05-05 01:06:51.825516 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-05-05 01:06:51.825533 | orchestrator | due to this access issue:
2025-05-05 01:06:51.825542 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-05-05 01:06:51.825551 | orchestrator | not a directory
2025-05-05 01:06:51.825561 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-05 01:06:51.825570 | orchestrator |
2025-05-05 01:06:51.825580 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-05-05 01:06:51.825589 | orchestrator | Monday 05 May 2025 01:04:20 +0000 (0:00:01.530) 0:01:43.966 ************
2025-05-05 01:06:51.825598 | orchestrator | skipping: [testbed-manager]
2025-05-05 01:06:51.825607 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:06:51.825617 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:06:51.825626 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:06:51.825636 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:06:51.825645 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:06:51.825654 | orchestrator | skipping: [testbed-node-5]
2025-05-05
01:06:51.825663 | orchestrator | 2025-05-05 01:06:51.825673 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-05 01:06:51.825682 | orchestrator | Monday 05 May 2025 01:04:21 +0000 (0:00:00.792) 0:01:44.758 ************ 2025-05-05 01:06:51.825692 | orchestrator | skipping: [testbed-manager] 2025-05-05 01:06:51.825701 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:06:51.825711 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:06:51.825720 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:06:51.825729 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:06:51.825739 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:06:51.825748 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:06:51.825757 | orchestrator | 2025-05-05 01:06:51.825766 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-05 01:06:51.825776 | orchestrator | Monday 05 May 2025 01:04:21 +0000 (0:00:00.726) 0:01:45.485 ************ 2025-05-05 01:06:51.825785 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825794 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:06:51.825810 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825819 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:06:51.825829 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825838 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:06:51.825848 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825857 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:06:51.825867 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825876 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:06:51.825886 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825895 | orchestrator | skipping: [testbed-manager] 2025-05-05 01:06:51.825904 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-05 01:06:51.825914 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:06:51.825923 | orchestrator | 2025-05-05 01:06:51.825932 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-05 01:06:51.825942 | orchestrator | Monday 05 May 2025 01:04:25 +0000 (0:00:03.329) 0:01:48.814 ************ 2025-05-05 01:06:51.825951 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.825960 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:06:51.825969 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.825984 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:06:51.825993 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.826003 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:06:51.826012 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.826050 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:06:51.826065 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.826074 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:06:51.826084 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.826093 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:06:51.826103 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-05 01:06:51.826112 | orchestrator | skipping: [testbed-manager] 2025-05-05 01:06:51.826121 | orchestrator | 2025-05-05 01:06:51.826131 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-05 01:06:51.826140 | orchestrator | Monday 05 May 2025 01:04:27 +0000 (0:00:02.696) 0:01:51.510 ************ 2025-05-05 01:06:51.826151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.826163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.826185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.826211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 
01:06:51.826228 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-05 01:06:51.826238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.826248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-05 01:06:51.826262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826368 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826393 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-05 01:06:51.826456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.826497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.826513 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-05 01:06:51.826523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-05 01:06:51.826534 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.826544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-05 01:06:51.826576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.826591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-05 01:06:51.826601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-05 01:06:51.826611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.826622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 01:06:51.826631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.826688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 01:06:51.826698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826731 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.826746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 01:06:51.826756 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826835 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826892 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826912 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.826926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.826954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 01:06:51.826965
| orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.826975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.827001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 01:06:51.827012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.827022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-05 01:06:51.827032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-05 01:06:51.827046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.827089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.827128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05 01:06:51.827176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-05 01:06:51.827186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-05
01:06:51.827196 | orchestrator |
2025-05-05 01:06:51.827206 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-05 01:06:51.827215 | orchestrator | Monday 05 May 2025 01:04:32 +0000 (0:00:04.884) 0:01:56.395 ************
2025-05-05 01:06:51.827225 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-05 01:06:51.827234 | orchestrator |
2025-05-05 01:06:51.827244 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827253 | orchestrator | Monday 05 May 2025 01:04:35 +0000 (0:00:02.879) 0:01:59.274 ************
2025-05-05 01:06:51.827262 | orchestrator |
2025-05-05 01:06:51.827272 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827348 | orchestrator | Monday 05 May 2025 01:04:35 +0000 (0:00:00.057) 0:01:59.331 ************
2025-05-05 01:06:51.827359 | orchestrator |
2025-05-05 01:06:51.827369 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827378 | orchestrator | Monday 05 May 2025 01:04:35 +0000 (0:00:00.251) 0:01:59.583 ************
2025-05-05 01:06:51.827387 | orchestrator |
2025-05-05 01:06:51.827440 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827452 | orchestrator | Monday 05 May 2025 01:04:35 +0000 (0:00:00.056) 0:01:59.639 ************
2025-05-05 01:06:51.827469 | orchestrator |
2025-05-05 01:06:51.827478 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827488 | orchestrator | Monday 05 May 2025 01:04:35 +0000 (0:00:00.056) 0:01:59.696 ************
2025-05-05 01:06:51.827497 | orchestrator |
2025-05-05 01:06:51.827506 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827515 | orchestrator | Monday 05 May 2025 01:04:36 +0000 (0:00:00.053) 0:01:59.750 ************
2025-05-05 01:06:51.827524 | orchestrator |
2025-05-05 01:06:51.827537 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-05 01:06:51.827584 | orchestrator | Monday 05 May 2025 01:04:36 +0000 (0:00:00.344) 0:02:00.094 ************
2025-05-05 01:06:51.827599 | orchestrator |
2025-05-05 01:06:51.827614 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-05 01:06:51.827629 | orchestrator | Monday 05 May 2025 01:04:36 +0000 (0:00:00.092) 0:02:00.187 ************
2025-05-05 01:06:51.827645 | orchestrator | changed: [testbed-manager]
2025-05-05 01:06:51.827661 | orchestrator |
2025-05-05 01:06:51.827676 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-05 01:06:51.827688 | orchestrator | Monday 05 May 2025 01:04:59 +0000 (0:00:23.434) 0:02:23.621 ************
2025-05-05 01:06:51.827697 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:06:51.827707 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:06:51.827716 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:06:51.827726 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:06:51.827735 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:06:51.827744 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:06:51.827754 | orchestrator | changed: [testbed-manager]
2025-05-05 01:06:51.827763 | orchestrator |
2025-05-05 01:06:51.827772 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-05 01:06:51.827782 | orchestrator | Monday 05 May 2025 01:05:19 +0000 (0:00:19.717) 0:02:43.339 ************
2025-05-05 01:06:51.827791 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:06:51.827800 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:06:51.827809 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:06:51.827824 | orchestrator |
2025-05-05 01:06:51.827834 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-05 01:06:51.827843 | orchestrator | Monday 05 May 2025 01:05:33 +0000 (0:00:14.034) 0:02:57.373 ************
2025-05-05 01:06:51.827852 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:06:51.827862 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:06:51.827871 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:06:51.827880 | orchestrator |
2025-05-05 01:06:51.827890 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-05 01:06:51.827899 | orchestrator | Monday 05 May 2025 01:05:48 +0000 (0:00:14.915) 0:03:12.289 ************
2025-05-05 01:06:51.827908 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:06:51.827925 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:06:51.827935 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:06:51.827944 | orchestrator | changed: [testbed-manager]
2025-05-05 01:06:51.827953 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:06:51.827963 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:06:51.827972 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:06:51.827981 | orchestrator |
2025-05-05 01:06:51.827990 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-05 01:06:51.828000 | orchestrator | Monday 05 May 2025 01:06:08 +0000 (0:00:20.354) 0:03:32.644 ************
2025-05-05 01:06:51.828009 | orchestrator | changed: [testbed-manager]
2025-05-05 01:06:51.828018 | orchestrator |
2025-05-05 01:06:51.828027 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-05 01:06:51.828037 | orchestrator | Monday 05 May 2025 01:06:18 +0000 (0:00:09.906) 0:03:42.551 ************
2025-05-05 01:06:51.828046 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:06:51.828055 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:06:51.828071 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:06:51.828081 | orchestrator |
2025-05-05 01:06:51.828090 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-05 01:06:51.828099 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:11.906) 0:03:54.457 ************
2025-05-05 01:06:51.828108 | orchestrator | changed: [testbed-manager]
2025-05-05 01:06:51.828118 | orchestrator |
2025-05-05 01:06:51.828127 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-05 01:06:51.828136 | orchestrator | Monday 05 May 2025 01:06:39 +0000 (0:00:08.519) 0:04:02.977 ************
2025-05-05 01:06:51.828146 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:06:51.828155 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:06:51.828164 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:06:51.828173 | orchestrator |
2025-05-05 01:06:51.828182 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:06:51.828192 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-05 01:06:51.828203 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-05 01:06:51.828212 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-05 01:06:51.828222 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-05 01:06:51.828231 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-05 01:06:51.828241 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-05 01:06:51.828250 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-05 01:06:51.828259 | orchestrator |
2025-05-05 01:06:51.828268 | orchestrator |
2025-05-05 01:06:51.828278 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:06:51.828287 | orchestrator | Monday 05 May 2025 01:06:51 +0000 (0:00:11.929) 0:04:14.907 ************
2025-05-05 01:06:51.828316 | orchestrator | ===============================================================================
2025-05-05 01:06:51.828331 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.49s
2025-05-05 01:06:51.828347 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.43s
2025-05-05 01:06:51.828357 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.35s
2025-05-05 01:06:51.828366 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.72s
2025-05-05 01:06:51.828376 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.11s
2025-05-05 01:06:51.828385 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 14.92s
2025-05-05 01:06:51.828394 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 14.03s
2025-05-05 01:06:51.828403 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.93s
2025-05-05 01:06:51.828412 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.91s
2025-05-05 01:06:51.828422 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.91s
2025-05-05 01:06:51.828431 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.52s
2025-05-05 01:06:51.828440 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.18s
2025-05-05 01:06:51.828449 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.35s
2025-05-05 01:06:51.828464 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.88s
2025-05-05 01:06:51.828473 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.37s
2025-05-05 01:06:51.828483 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.58s
2025-05-05 01:06:51.828492 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.53s
2025-05-05 01:06:51.828501 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.33s
2025-05-05 01:06:51.828514 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.25s
2025-05-05 01:06:54.849191 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.20s
2025-05-05 01:06:54.849403 | orchestrator | 2025-05-05 01:06:51 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:06:54.849429 | orchestrator | 2025-05-05 01:06:51 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:06:54.849462 | orchestrator | 2025-05-05 01:06:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:06:54.851693 | orchestrator | 2025-05-05 01:06:54 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED
2025-05-05 01:06:54.853211 | orchestrator | 2025-05-05 01:06:54 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED
2025-05-05 01:06:54.854780 | orchestrator | 2025-05-05 01:06:54 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED
2025-05-05 01:06:54.857569 | orchestrator | 2025-05-05 01:06:54 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05
01:06:57.881505 | orchestrator | 2025-05-05 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:06:57.881625 | orchestrator | 2025-05-05 01:06:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:06:57.882675 | orchestrator | 2025-05-05 01:06:57 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:06:57.883041 | orchestrator | 2025-05-05 01:06:57 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:06:57.883513 | orchestrator | 2025-05-05 01:06:57 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:06:57.884042 | orchestrator | 2025-05-05 01:06:57 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:00.917161 | orchestrator | 2025-05-05 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:00.917328 | orchestrator | 2025-05-05 01:07:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:00.917749 | orchestrator | 2025-05-05 01:07:00 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:00.918229 | orchestrator | 2025-05-05 01:07:00 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:00.919061 | orchestrator | 2025-05-05 01:07:00 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:00.920053 | orchestrator | 2025-05-05 01:07:00 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:03.966891 | orchestrator | 2025-05-05 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:03.967016 | orchestrator | 2025-05-05 01:07:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:03.969419 | orchestrator | 2025-05-05 01:07:03 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:03.971679 | orchestrator 
| 2025-05-05 01:07:03 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:03.973597 | orchestrator | 2025-05-05 01:07:03 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:03.975081 | orchestrator | 2025-05-05 01:07:03 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:03.975269 | orchestrator | 2025-05-05 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:07.038969 | orchestrator | 2025-05-05 01:07:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:07.039176 | orchestrator | 2025-05-05 01:07:07 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:07.040442 | orchestrator | 2025-05-05 01:07:07 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:07.041247 | orchestrator | 2025-05-05 01:07:07 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:07.042490 | orchestrator | 2025-05-05 01:07:07 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:10.084362 | orchestrator | 2025-05-05 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:10.084486 | orchestrator | 2025-05-05 01:07:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:10.086162 | orchestrator | 2025-05-05 01:07:10 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:10.088352 | orchestrator | 2025-05-05 01:07:10 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:10.090109 | orchestrator | 2025-05-05 01:07:10 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:10.091756 | orchestrator | 2025-05-05 01:07:10 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:10.092126 | orchestrator | 
2025-05-05 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:13.130481 | orchestrator | 2025-05-05 01:07:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:13.131562 | orchestrator | 2025-05-05 01:07:13 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:13.133041 | orchestrator | 2025-05-05 01:07:13 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:13.134231 | orchestrator | 2025-05-05 01:07:13 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:13.135617 | orchestrator | 2025-05-05 01:07:13 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:13.136170 | orchestrator | 2025-05-05 01:07:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:16.194960 | orchestrator | 2025-05-05 01:07:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:16.196994 | orchestrator | 2025-05-05 01:07:16 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:16.199005 | orchestrator | 2025-05-05 01:07:16 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:16.201129 | orchestrator | 2025-05-05 01:07:16 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:16.203503 | orchestrator | 2025-05-05 01:07:16 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:16.203717 | orchestrator | 2025-05-05 01:07:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:19.256953 | orchestrator | 2025-05-05 01:07:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:19.258124 | orchestrator | 2025-05-05 01:07:19 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:19.260927 | orchestrator | 2025-05-05 01:07:19 | INFO  | 
Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:19.265596 | orchestrator | 2025-05-05 01:07:19 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:19.267403 | orchestrator | 2025-05-05 01:07:19 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:22.315177 | orchestrator | 2025-05-05 01:07:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:22.315414 | orchestrator | 2025-05-05 01:07:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:22.315574 | orchestrator | 2025-05-05 01:07:22 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:22.315597 | orchestrator | 2025-05-05 01:07:22 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:22.315610 | orchestrator | 2025-05-05 01:07:22 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:22.315629 | orchestrator | 2025-05-05 01:07:22 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:25.365068 | orchestrator | 2025-05-05 01:07:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:25.365227 | orchestrator | 2025-05-05 01:07:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:25.366563 | orchestrator | 2025-05-05 01:07:25 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:25.367319 | orchestrator | 2025-05-05 01:07:25 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:25.369355 | orchestrator | 2025-05-05 01:07:25 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state STARTED 2025-05-05 01:07:25.369534 | orchestrator | 2025-05-05 01:07:25 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:28.431808 | orchestrator | 2025-05-05 01:07:25 | INFO  | Wait 1 
second(s) until the next check 2025-05-05 01:07:28.431952 | orchestrator | 2025-05-05 01:07:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:28.434011 | orchestrator | 2025-05-05 01:07:28 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:28.435853 | orchestrator | 2025-05-05 01:07:28 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:28.438183 | orchestrator | 2025-05-05 01:07:28 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:28.440267 | orchestrator | 2025-05-05 01:07:28 | INFO  | Task 265a3eba-a2cb-4c60-a23d-b4a4809a4f15 is in state SUCCESS 2025-05-05 01:07:28.442529 | orchestrator | 2025-05-05 01:07:28.442698 | orchestrator | 2025-05-05 01:07:28.442720 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:07:28.442845 | orchestrator | 2025-05-05 01:07:28.443081 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 01:07:28.443107 | orchestrator | Monday 05 May 2025 01:04:09 +0000 (0:00:00.271) 0:00:00.271 ************ 2025-05-05 01:07:28.443122 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:07:28.443138 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:07:28.443152 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:07:28.443167 | orchestrator | 2025-05-05 01:07:28.443181 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:07:28.443196 | orchestrator | Monday 05 May 2025 01:04:10 +0000 (0:00:00.645) 0:00:00.917 ************ 2025-05-05 01:07:28.443210 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-05 01:07:28.443251 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-05 01:07:28.443266 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-05 01:07:28.443315 
| orchestrator | 2025-05-05 01:07:28.443339 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-05 01:07:28.443354 | orchestrator | 2025-05-05 01:07:28.443368 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-05 01:07:28.443383 | orchestrator | Monday 05 May 2025 01:04:11 +0000 (0:00:00.580) 0:00:01.498 ************ 2025-05-05 01:07:28.443397 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:07:28.443413 | orchestrator | 2025-05-05 01:07:28.443427 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-05 01:07:28.443442 | orchestrator | Monday 05 May 2025 01:04:11 +0000 (0:00:00.764) 0:00:02.263 ************ 2025-05-05 01:07:28.443456 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-05 01:07:28.443470 | orchestrator | 2025-05-05 01:07:28.443484 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-05 01:07:28.443498 | orchestrator | Monday 05 May 2025 01:04:14 +0000 (0:00:03.180) 0:00:05.443 ************ 2025-05-05 01:07:28.443512 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-05 01:07:28.443527 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-05 01:07:28.443541 | orchestrator | 2025-05-05 01:07:28.443556 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-05 01:07:28.443584 | orchestrator | Monday 05 May 2025 01:04:21 +0000 (0:00:06.090) 0:00:11.533 ************ 2025-05-05 01:07:28.443599 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-05 01:07:28.443614 | orchestrator | 2025-05-05 01:07:28.443628 | orchestrator | TASK [service-ks-register : 
glance | Creating users] *************************** 2025-05-05 01:07:28.443642 | orchestrator | Monday 05 May 2025 01:04:24 +0000 (0:00:03.490) 0:00:15.024 ************ 2025-05-05 01:07:28.443656 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:07:28.443670 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-05 01:07:28.443685 | orchestrator | 2025-05-05 01:07:28.443699 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-05 01:07:28.443715 | orchestrator | Monday 05 May 2025 01:04:28 +0000 (0:00:03.699) 0:00:18.723 ************ 2025-05-05 01:07:28.443732 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-05 01:07:28.443749 | orchestrator | 2025-05-05 01:07:28.443764 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-05 01:07:28.443780 | orchestrator | Monday 05 May 2025 01:04:31 +0000 (0:00:03.247) 0:00:21.971 ************ 2025-05-05 01:07:28.443796 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-05 01:07:28.443812 | orchestrator | 2025-05-05 01:07:28.443827 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-05 01:07:28.443843 | orchestrator | Monday 05 May 2025 01:04:35 +0000 (0:00:04.074) 0:00:26.045 ************ 2025-05-05 01:07:28.443905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', 
'', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.443940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.443972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.444008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.444038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.444073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.444101 | orchestrator | 2025-05-05 01:07:28.444116 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-05 01:07:28.444131 | orchestrator | Monday 05 May 2025 01:04:39 +0000 (0:00:03.813) 0:00:29.858 ************ 2025-05-05 01:07:28.444150 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:07:28.444166 | orchestrator | 2025-05-05 01:07:28.444180 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-05 01:07:28.444194 | orchestrator | Monday 05 May 2025 01:04:39 +0000 (0:00:00.549) 0:00:30.408 ************ 2025-05-05 01:07:28.444208 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.444223 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:07:28.444237 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:07:28.444251 | orchestrator | 2025-05-05 01:07:28.444265 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-05 01:07:28.444306 | orchestrator | Monday 05 May 2025 01:04:46 +0000 (0:00:06.959) 0:00:37.367 ************ 2025-05-05 01:07:28.444321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:28.444335 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 
'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:28.444350 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:28.444364 | orchestrator | 2025-05-05 01:07:28.444378 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-05 01:07:28.444392 | orchestrator | Monday 05 May 2025 01:04:48 +0000 (0:00:01.529) 0:00:38.897 ************ 2025-05-05 01:07:28.444406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:28.444421 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:28.444442 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:28.444456 | orchestrator | 2025-05-05 01:07:28.444470 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-05 01:07:28.444484 | orchestrator | Monday 05 May 2025 01:04:49 +0000 (0:00:01.136) 0:00:40.034 ************ 2025-05-05 01:07:28.444498 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:07:28.444513 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:07:28.444527 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:07:28.444541 | orchestrator | 2025-05-05 01:07:28.444555 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-05 01:07:28.444569 | orchestrator | Monday 05 May 2025 01:04:50 +0000 (0:00:00.588) 0:00:40.622 ************ 2025-05-05 01:07:28.444583 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.444598 | orchestrator | 2025-05-05 01:07:28.444612 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-05 01:07:28.444625 | orchestrator | Monday 05 May 2025 01:04:50 +0000 
(0:00:00.248) 0:00:40.870 ************ 2025-05-05 01:07:28.444639 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.444654 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.444668 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.444682 | orchestrator | 2025-05-05 01:07:28.444696 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-05 01:07:28.444709 | orchestrator | Monday 05 May 2025 01:04:50 +0000 (0:00:00.269) 0:00:41.139 ************ 2025-05-05 01:07:28.444723 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:07:28.444738 | orchestrator | 2025-05-05 01:07:28.444752 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-05 01:07:28.444766 | orchestrator | Monday 05 May 2025 01:04:51 +0000 (0:00:00.746) 0:00:41.885 ************ 2025-05-05 01:07:28.444789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.444818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.444849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.444866 | orchestrator | 2025-05-05 01:07:28.444880 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-05 01:07:28.444894 | orchestrator | Monday 05 May 2025 01:04:56 +0000 (0:00:04.595) 0:00:46.481 ************ 2025-05-05 01:07:28.444909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 01:07:28.444947 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.444969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 01:07:28.444985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 01:07:28.445017 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445032 | 
orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.445046 | orchestrator | 2025-05-05 01:07:28.445061 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-05 01:07:28.445075 | orchestrator | Monday 05 May 2025 01:05:00 +0000 (0:00:04.396) 0:00:50.877 ************ 2025-05-05 01:07:28.445097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 01:07:28.445113 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 01:07:28.445161 | orchestrator | skipping: [testbed-node-0] 2025-05-05 
01:07:28.445176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-05 01:07:28.445191 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.445206 | orchestrator | 2025-05-05 01:07:28.445220 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-05 
01:07:28.445239 | orchestrator | Monday 05 May 2025 01:05:09 +0000 (0:00:08.929) 0:00:59.807 ************ 2025-05-05 01:07:28.445254 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445303 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.445321 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.445335 | orchestrator | 2025-05-05 01:07:28.445356 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-05 01:07:28.445371 | orchestrator | Monday 05 May 2025 01:05:14 +0000 (0:00:05.305) 0:01:05.113 ************ 2025-05-05 01:07:28.445386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.445422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.445447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.445471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
2025-05-05 01:07:28.445506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.445522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.445554 | orchestrator | 2025-05-05 01:07:28.445569 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-05 01:07:28.445583 | orchestrator | Monday 05 May 2025 01:05:19 +0000 (0:00:04.454) 0:01:09.568 ************ 
2025-05-05 01:07:28.445597 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:07:28.445611 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.445625 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:07:28.445639 | orchestrator | 2025-05-05 01:07:28.445654 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-05 01:07:28.445668 | orchestrator | Monday 05 May 2025 01:05:30 +0000 (0:00:11.058) 0:01:20.626 ************ 2025-05-05 01:07:28.445682 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445696 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.445711 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.445725 | orchestrator | 2025-05-05 01:07:28.445739 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-05 01:07:28.445753 | orchestrator | Monday 05 May 2025 01:05:41 +0000 (0:00:10.858) 0:01:31.484 ************ 2025-05-05 01:07:28.445767 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.445781 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445795 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.445809 | orchestrator | 2025-05-05 01:07:28.445823 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-05 01:07:28.445838 | orchestrator | Monday 05 May 2025 01:05:51 +0000 (0:00:10.397) 0:01:41.881 ************ 2025-05-05 01:07:28.445852 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.445866 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445885 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.445900 | orchestrator | 2025-05-05 01:07:28.445914 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-05 01:07:28.445928 | orchestrator | Monday 05 May 2025 01:06:03 +0000 (0:00:11.924) 0:01:53.806 ************ 
2025-05-05 01:07:28.445942 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.445962 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.445977 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.445991 | orchestrator | 2025-05-05 01:07:28.446005 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-05 01:07:28.446079 | orchestrator | Monday 05 May 2025 01:06:09 +0000 (0:00:05.775) 0:01:59.582 ************ 2025-05-05 01:07:28.446097 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.446112 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.446126 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.446140 | orchestrator | 2025-05-05 01:07:28.446154 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-05 01:07:28.446168 | orchestrator | Monday 05 May 2025 01:06:09 +0000 (0:00:00.327) 0:01:59.909 ************ 2025-05-05 01:07:28.446182 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-05 01:07:28.446196 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.446210 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-05 01:07:28.446224 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.446238 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-05 01:07:28.446252 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.446266 | orchestrator | 2025-05-05 01:07:28.446306 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-05 01:07:28.446321 | orchestrator | Monday 05 May 2025 01:06:13 +0000 (0:00:03.691) 0:02:03.601 ************ 2025-05-05 01:07:28.446336 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.446361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.446401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.446435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.446459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-05 01:07:28.446475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-05 01:07:28.446507 | orchestrator | 2025-05-05 01:07:28.446522 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-05 01:07:28.446536 | orchestrator | Monday 05 May 2025 01:06:17 +0000 (0:00:04.670) 0:02:08.271 ************ 2025-05-05 01:07:28.446550 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:28.446565 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:28.446579 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:28.446593 | orchestrator | 2025-05-05 01:07:28.446612 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-05 01:07:28.446627 | orchestrator | Monday 05 May 2025 01:06:18 +0000 (0:00:00.408) 0:02:08.679 ************ 2025-05-05 01:07:28.446641 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.446655 | orchestrator | 2025-05-05 01:07:28.446669 | orchestrator | 
TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-05 01:07:28.446683 | orchestrator | Monday 05 May 2025 01:06:20 +0000 (0:00:02.205) 0:02:10.885 ************ 2025-05-05 01:07:28.446697 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.446711 | orchestrator | 2025-05-05 01:07:28.446726 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-05 01:07:28.446740 | orchestrator | Monday 05 May 2025 01:06:22 +0000 (0:00:02.349) 0:02:13.234 ************ 2025-05-05 01:07:28.446754 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.446768 | orchestrator | 2025-05-05 01:07:28.446781 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-05 01:07:28.446796 | orchestrator | Monday 05 May 2025 01:06:24 +0000 (0:00:02.109) 0:02:15.344 ************ 2025-05-05 01:07:28.446810 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.446824 | orchestrator | 2025-05-05 01:07:28.446838 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-05 01:07:28.446851 | orchestrator | Monday 05 May 2025 01:06:49 +0000 (0:00:24.981) 0:02:40.325 ************ 2025-05-05 01:07:28.446865 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.446879 | orchestrator | 2025-05-05 01:07:28.446893 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-05 01:07:28.446913 | orchestrator | Monday 05 May 2025 01:06:52 +0000 (0:00:02.368) 0:02:42.694 ************ 2025-05-05 01:07:28.446928 | orchestrator | 2025-05-05 01:07:28.446942 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-05 01:07:28.446956 | orchestrator | Monday 05 May 2025 01:06:52 +0000 (0:00:00.077) 0:02:42.771 ************ 2025-05-05 01:07:28.446970 | orchestrator | 2025-05-05 01:07:28.446984 | orchestrator | 
TASK [glance : Flush handlers] ************************************************* 2025-05-05 01:07:28.446998 | orchestrator | Monday 05 May 2025 01:06:52 +0000 (0:00:00.055) 0:02:42.826 ************ 2025-05-05 01:07:28.447012 | orchestrator | 2025-05-05 01:07:28.447025 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-05 01:07:28.447039 | orchestrator | Monday 05 May 2025 01:06:52 +0000 (0:00:00.189) 0:02:43.016 ************ 2025-05-05 01:07:28.447053 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:28.447067 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:07:28.447081 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:07:28.447095 | orchestrator | 2025-05-05 01:07:28.447109 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:07:28.447124 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-05 01:07:28.447139 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-05 01:07:28.447153 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-05 01:07:28.447176 | orchestrator | 2025-05-05 01:07:28.447190 | orchestrator | 2025-05-05 01:07:28.447205 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:07:28.447219 | orchestrator | Monday 05 May 2025 01:07:26 +0000 (0:00:33.727) 0:03:16.744 ************ 2025-05-05 01:07:28.447233 | orchestrator | =============================================================================== 2025-05-05 01:07:28.447247 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.73s 2025-05-05 01:07:28.447261 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.98s 2025-05-05 01:07:28.447303 | 
orchestrator | glance : Copying over glance-image-import.conf ------------------------- 11.92s 2025-05-05 01:07:28.447329 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 11.06s 2025-05-05 01:07:28.447351 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 10.86s 2025-05-05 01:07:28.447370 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 10.40s 2025-05-05 01:07:28.447384 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 8.93s 2025-05-05 01:07:28.447398 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.96s 2025-05-05 01:07:28.447412 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.09s 2025-05-05 01:07:28.447426 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.78s 2025-05-05 01:07:28.447440 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.30s 2025-05-05 01:07:28.447454 | orchestrator | glance : Check glance containers ---------------------------------------- 4.67s 2025-05-05 01:07:28.447469 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.60s 2025-05-05 01:07:28.447483 | orchestrator | glance : Copying over config.json files for services -------------------- 4.45s 2025-05-05 01:07:28.447497 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.40s 2025-05-05 01:07:28.447511 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.07s 2025-05-05 01:07:28.447525 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.81s 2025-05-05 01:07:28.447539 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.70s 2025-05-05 01:07:28.447553 | 
orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.69s 2025-05-05 01:07:28.447573 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.49s 2025-05-05 01:07:31.480913 | orchestrator | 2025-05-05 01:07:28 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:31.481027 | orchestrator | 2025-05-05 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:31.481077 | orchestrator | 2025-05-05 01:07:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:31.482351 | orchestrator | 2025-05-05 01:07:31 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:31.485329 | orchestrator | 2025-05-05 01:07:31 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state STARTED 2025-05-05 01:07:31.486935 | orchestrator | 2025-05-05 01:07:31 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:31.488492 | orchestrator | 2025-05-05 01:07:31 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:34.532496 | orchestrator | 2025-05-05 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:34.532645 | orchestrator | 2025-05-05 01:07:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:34.533640 | orchestrator | 2025-05-05 01:07:34 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:34.536460 | orchestrator | 2025-05-05 01:07:34 | INFO  | Task 562eced3-8396-4743-8ba0-44009218bd53 is in state SUCCESS 2025-05-05 01:07:34.537738 | orchestrator | 2025-05-05 01:07:34.537991 | orchestrator | 2025-05-05 01:07:34.538163 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:07:34.538598 | orchestrator | 2025-05-05 01:07:34.538624 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2025-05-05 01:07:34.538640 | orchestrator | Monday 05 May 2025 01:04:27 +0000 (0:00:00.378) 0:00:00.378 ************ 2025-05-05 01:07:34.538654 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:07:34.538712 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:07:34.538728 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:07:34.538742 | orchestrator | ok: [testbed-node-3] 2025-05-05 01:07:34.538756 | orchestrator | ok: [testbed-node-4] 2025-05-05 01:07:34.538800 | orchestrator | ok: [testbed-node-5] 2025-05-05 01:07:34.539102 | orchestrator | 2025-05-05 01:07:34.539119 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:07:34.539133 | orchestrator | Monday 05 May 2025 01:04:27 +0000 (0:00:00.451) 0:00:00.829 ************ 2025-05-05 01:07:34.539149 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-05 01:07:34.539163 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-05 01:07:34.539178 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-05 01:07:34.539192 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-05 01:07:34.539207 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-05 01:07:34.539383 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-05 01:07:34.539404 | orchestrator | 2025-05-05 01:07:34.539453 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-05 01:07:34.539855 | orchestrator | 2025-05-05 01:07:34.539875 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-05 01:07:34.539889 | orchestrator | Monday 05 May 2025 01:04:28 +0000 (0:00:00.572) 0:00:01.401 ************ 2025-05-05 01:07:34.539904 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:07:34.539920 | orchestrator | 2025-05-05 01:07:34.539935 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-05 01:07:34.539949 | orchestrator | Monday 05 May 2025 01:04:29 +0000 (0:00:01.668) 0:00:03.070 ************ 2025-05-05 01:07:34.539965 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-05 01:07:34.539979 | orchestrator | 2025-05-05 01:07:34.539993 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-05 01:07:34.540008 | orchestrator | Monday 05 May 2025 01:04:32 +0000 (0:00:03.260) 0:00:06.331 ************ 2025-05-05 01:07:34.540022 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-05 01:07:34.540036 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-05 01:07:34.540050 | orchestrator | 2025-05-05 01:07:34.540065 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-05 01:07:34.540079 | orchestrator | Monday 05 May 2025 01:04:39 +0000 (0:00:06.472) 0:00:12.803 ************ 2025-05-05 01:07:34.540093 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-05 01:07:34.540107 | orchestrator | 2025-05-05 01:07:34.540121 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-05 01:07:34.540136 | orchestrator | Monday 05 May 2025 01:04:42 +0000 (0:00:03.406) 0:00:16.210 ************ 2025-05-05 01:07:34.540150 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:07:34.540165 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-05 01:07:34.540179 | orchestrator | 2025-05-05 01:07:34.540193 | orchestrator | TASK 
[service-ks-register : cinder | Creating roles] *************************** 2025-05-05 01:07:34.540207 | orchestrator | Monday 05 May 2025 01:04:46 +0000 (0:00:03.886) 0:00:20.097 ************ 2025-05-05 01:07:34.540244 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-05 01:07:34.540305 | orchestrator | 2025-05-05 01:07:34.540321 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-05 01:07:34.540336 | orchestrator | Monday 05 May 2025 01:04:49 +0000 (0:00:03.043) 0:00:23.140 ************ 2025-05-05 01:07:34.540351 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-05 01:07:34.540365 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-05 01:07:34.540379 | orchestrator | 2025-05-05 01:07:34.540393 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-05 01:07:34.540407 | orchestrator | Monday 05 May 2025 01:04:58 +0000 (0:00:08.419) 0:00:31.560 ************ 2025-05-05 01:07:34.540467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
2025-05-05 01:07:34.540490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.540527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.540572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.540640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.540685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.540703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.540956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.540993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.541008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.541049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.541077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.541093 | orchestrator | 2025-05-05 01:07:34.541108 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-05 01:07:34.541123 | orchestrator | Monday 05 May 2025 01:05:01 +0000 (0:00:02.804) 0:00:34.364 ************ 2025-05-05 01:07:34.541137 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.541151 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:34.541172 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.541187 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:07:34.541201 | orchestrator | 2025-05-05 01:07:34.541215 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-05 01:07:34.541229 | orchestrator | Monday 05 May 2025 01:05:03 +0000 (0:00:02.554) 0:00:36.919 ************ 2025-05-05 01:07:34.541243 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-05 01:07:34.541257 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-05 01:07:34.541328 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-05 01:07:34.541343 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-05 01:07:34.541357 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-05 01:07:34.541371 | 
orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-05 01:07:34.541385 | orchestrator | 2025-05-05 01:07:34.541399 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-05 01:07:34.541414 | orchestrator | Monday 05 May 2025 01:05:08 +0000 (0:00:04.538) 0:00:41.457 ************ 2025-05-05 01:07:34.541429 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-05 01:07:34.541445 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}])  2025-05-05 01:07:34.541492 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-05 01:07:34.541510 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-05 01:07:34.541534 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-05 01:07:34.541561 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-05 01:07:34.541578 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-05 01:07:34.541621 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-05 01:07:34.541645 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-05 01:07:34.541661 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-05 01:07:34.541677 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-05 01:07:34.541716 | orchestrator | 
changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-05 01:07:34.541733 | orchestrator | 2025-05-05 01:07:34.541748 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-05 01:07:34.541763 | orchestrator | Monday 05 May 2025 01:05:13 +0000 (0:00:05.268) 0:00:46.726 ************ 2025-05-05 01:07:34.541777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:34.541792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:34.541804 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-05 01:07:34.541817 | orchestrator | 2025-05-05 01:07:34.541830 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-05 01:07:34.541850 | orchestrator | Monday 05 May 2025 01:05:15 +0000 (0:00:02.339) 0:00:49.066 ************ 2025-05-05 01:07:34.541862 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-05 01:07:34.541875 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-05 01:07:34.541888 | orchestrator | 
changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-05 01:07:34.541911 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-05 01:07:34.541924 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-05 01:07:34.541937 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-05 01:07:34.541949 | orchestrator | 2025-05-05 01:07:34.541961 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-05 01:07:34.541974 | orchestrator | Monday 05 May 2025 01:05:18 +0000 (0:00:03.087) 0:00:52.153 ************ 2025-05-05 01:07:34.541987 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-05 01:07:34.541999 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-05 01:07:34.542012 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-05 01:07:34.542076 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-05 01:07:34.542090 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-05 01:07:34.542103 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-05 01:07:34.542116 | orchestrator | 2025-05-05 01:07:34.542129 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-05 01:07:34.542141 | orchestrator | Monday 05 May 2025 01:05:19 +0000 (0:00:01.030) 0:00:53.184 ************ 2025-05-05 01:07:34.542154 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.542166 | orchestrator | 2025-05-05 01:07:34.542179 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-05 01:07:34.542191 | orchestrator | Monday 05 May 2025 01:05:19 +0000 (0:00:00.100) 0:00:53.284 ************ 2025-05-05 01:07:34.542204 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.542216 | orchestrator | skipping: 
[testbed-node-1] 2025-05-05 01:07:34.542229 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.542242 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:07:34.542254 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:07:34.542281 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:07:34.542295 | orchestrator | 2025-05-05 01:07:34.542308 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-05 01:07:34.542320 | orchestrator | Monday 05 May 2025 01:05:21 +0000 (0:00:01.279) 0:00:54.564 ************ 2025-05-05 01:07:34.542335 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:07:34.542349 | orchestrator | 2025-05-05 01:07:34.542362 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-05 01:07:34.542374 | orchestrator | Monday 05 May 2025 01:05:23 +0000 (0:00:02.620) 0:00:57.185 ************ 2025-05-05 01:07:34.542387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
2025-05-05 01:07:34.542452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.542469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.542483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 
01:07:34.542644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.542663 | orchestrator | 2025-05-05 01:07:34.542676 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-05 01:07:34.542689 | orchestrator | Monday 05 May 2025 01:05:27 +0000 (0:00:03.719) 0:01:00.904 ************ 2025-05-05 01:07:34.542725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.542751 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.542765 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:34.542778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.542791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.542804 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:07:34.542823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.542860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2025-05-05 01:07:34.542875 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.542888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.542911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.542926 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.542939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.542959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.542973 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:07:34.543018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543048 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:07:34.543060 | orchestrator | 2025-05-05 01:07:34.543073 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-05 01:07:34.543086 | orchestrator | Monday 05 May 2025 01:05:29 +0000 (0:00:01.704) 0:01:02.608 ************ 2025-05-05 01:07:34.543099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.543112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543132 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.543145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.543183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543198 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:34.543211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.543233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543247 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.543260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543314 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:07:34.543366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543397 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:07:34.543410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543443 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:07:34.543456 | orchestrator | 2025-05-05 01:07:34.543469 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-05 01:07:34.543481 | orchestrator | Monday 05 May 2025 01:05:30 +0000 (0:00:01.616) 0:01:04.225 ************ 2025-05-05 01:07:34.543494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.543532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.543572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 
01:07:34.543606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.543653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.543668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.543682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.543702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.543998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544096 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544125 | orchestrator |
2025-05-05 01:07:34.544138 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-05-05 01:07:34.544150 | orchestrator | Monday 05 May 2025 01:05:34 +0000 (0:00:03.471) 0:01:07.696 ************
2025-05-05 01:07:34.544163 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-05 01:07:34.544182 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:07:34.544195 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-05 01:07:34.544505 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:07:34.544538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-05 01:07:34.544553 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-05 01:07:34.544568 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:07:34.544582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-05 01:07:34.544597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-05 01:07:34.544611 | orchestrator |
2025-05-05 01:07:34.544625 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-05-05 01:07:34.544638 | orchestrator | Monday 05 May 2025 01:05:38 +0000 (0:00:03.881) 0:01:11.577 ************
2025-05-05 01:07:34.544650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.544662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.544695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.544726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.544751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.544769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.544786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.544975 | orchestrator |
2025-05-05 01:07:34.544989 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-05-05 01:07:34.545000 | orchestrator | Monday 05 May 2025 01:05:48 +0000 (0:00:10.307) 0:01:21.885 ************
2025-05-05 01:07:34.545015 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:07:34.545026 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:07:34.545036 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:07:34.545047 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:07:34.545057 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:07:34.545067 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:07:34.545077 | orchestrator |
2025-05-05 01:07:34.545088 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-05-05 01:07:34.545098 | orchestrator | Monday 05 May 2025 01:05:55 +0000 (0:00:06.662) 0:01:28.547 ************
2025-05-05 01:07:34.545108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.545119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.545130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.545141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.545152 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:07:34.545168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-05 01:07:34.545184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.545195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.545206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-05 01:07:34.545216 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:07:34.545227 |
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545301 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.545311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545366 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:07:34.545377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545426 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:07:34.545441 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545485 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:07:34.545495 | orchestrator | 2025-05-05 01:07:34.545505 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-05 01:07:34.545520 | orchestrator | Monday 05 May 2025 01:05:58 +0000 (0:00:03.409) 0:01:31.956 ************ 2025-05-05 01:07:34.545531 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.545542 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:34.545552 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.545563 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:07:34.545573 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:07:34.545583 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:07:34.545594 | orchestrator 
| 2025-05-05 01:07:34.545604 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-05 01:07:34.545614 | orchestrator | Monday 05 May 2025 01:06:00 +0000 (0:00:01.928) 0:01:33.885 ************ 2025-05-05 01:07:34.545629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-05 01:07:34.545693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.545705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.545727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-05 01:07:34.545742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545806 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-05 01:07:34.545911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-05 01:07:34.545932 | orchestrator | 2025-05-05 01:07:34.545943 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-05 01:07:34.545957 | orchestrator | Monday 05 May 2025 01:06:04 +0000 (0:00:03.483) 0:01:37.369 ************ 2025-05-05 01:07:34.545968 
| orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.545978 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:07:34.545989 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:07:34.546004 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:07:34.546139 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:07:34.546156 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:07:34.546167 | orchestrator | 2025-05-05 01:07:34.546177 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-05 01:07:34.546188 | orchestrator | Monday 05 May 2025 01:06:04 +0000 (0:00:00.875) 0:01:38.244 ************ 2025-05-05 01:07:34.546198 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:34.546208 | orchestrator | 2025-05-05 01:07:34.546219 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-05 01:07:34.546229 | orchestrator | Monday 05 May 2025 01:06:07 +0000 (0:00:02.601) 0:01:40.846 ************ 2025-05-05 01:07:34.546239 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:34.546249 | orchestrator | 2025-05-05 01:07:34.546259 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-05 01:07:34.546288 | orchestrator | Monday 05 May 2025 01:06:09 +0000 (0:00:02.235) 0:01:43.081 ************ 2025-05-05 01:07:34.546299 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:34.546310 | orchestrator | 2025-05-05 01:07:34.546320 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-05 01:07:34.546330 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:20.678) 0:02:03.760 ************ 2025-05-05 01:07:34.546341 | orchestrator | 2025-05-05 01:07:34.546351 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-05 01:07:34.546361 | orchestrator | Monday 05 May 2025 01:06:30 +0000 
(0:00:00.053) 0:02:03.814 ************ 2025-05-05 01:07:34.546371 | orchestrator | 2025-05-05 01:07:34.546381 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-05 01:07:34.546391 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:00.150) 0:02:03.965 ************ 2025-05-05 01:07:34.546402 | orchestrator | 2025-05-05 01:07:34.546412 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-05 01:07:34.546422 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:00.047) 0:02:04.013 ************ 2025-05-05 01:07:34.546433 | orchestrator | 2025-05-05 01:07:34.546443 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-05 01:07:34.546453 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:00.048) 0:02:04.061 ************ 2025-05-05 01:07:34.546464 | orchestrator | 2025-05-05 01:07:34.546481 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-05 01:07:34.546498 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:00.047) 0:02:04.109 ************ 2025-05-05 01:07:34.546514 | orchestrator | 2025-05-05 01:07:34.546529 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-05 01:07:34.546546 | orchestrator | Monday 05 May 2025 01:06:30 +0000 (0:00:00.149) 0:02:04.258 ************ 2025-05-05 01:07:34.546561 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:34.546577 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:07:34.546593 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:07:34.546611 | orchestrator | 2025-05-05 01:07:34.546628 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-05 01:07:34.546645 | orchestrator | Monday 05 May 2025 01:06:48 +0000 (0:00:17.959) 0:02:22.217 ************ 2025-05-05 01:07:34.546662 | 
orchestrator | changed: [testbed-node-0] 2025-05-05 01:07:34.546673 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:07:34.546684 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:07:34.546694 | orchestrator | 2025-05-05 01:07:34.546705 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-05 01:07:34.546723 | orchestrator | Monday 05 May 2025 01:06:54 +0000 (0:00:05.769) 0:02:27.987 ************ 2025-05-05 01:07:34.547697 | orchestrator | changed: [testbed-node-5] 2025-05-05 01:07:34.547721 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:07:34.547738 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:07:34.547754 | orchestrator | 2025-05-05 01:07:34.547770 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-05 01:07:34.547804 | orchestrator | Monday 05 May 2025 01:07:20 +0000 (0:00:25.534) 0:02:53.522 ************ 2025-05-05 01:07:34.547821 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:07:34.547838 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:07:34.547855 | orchestrator | changed: [testbed-node-5] 2025-05-05 01:07:34.547871 | orchestrator | 2025-05-05 01:07:34.547970 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-05 01:07:34.547988 | orchestrator | Monday 05 May 2025 01:07:31 +0000 (0:00:11.097) 0:03:04.619 ************ 2025-05-05 01:07:34.548004 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:07:34.548019 | orchestrator | 2025-05-05 01:07:34.548034 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:07:34.548051 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-05 01:07:34.548068 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-05 01:07:34.548084 | 
orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-05 01:07:34.548099 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-05 01:07:34.548114 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-05 01:07:34.548129 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-05 01:07:34.548144 | orchestrator | 2025-05-05 01:07:34.548159 | orchestrator | 2025-05-05 01:07:34.548176 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:07:34.548186 | orchestrator | Monday 05 May 2025 01:07:31 +0000 (0:00:00.542) 0:03:05.162 ************ 2025-05-05 01:07:34.548204 | orchestrator | =============================================================================== 2025-05-05 01:07:34.548219 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.53s 2025-05-05 01:07:34.548234 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.68s 2025-05-05 01:07:34.548250 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.96s 2025-05-05 01:07:34.548286 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.10s 2025-05-05 01:07:34.548296 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.31s 2025-05-05 01:07:34.548305 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.42s 2025-05-05 01:07:34.548314 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 6.66s 2025-05-05 01:07:34.548323 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.47s 2025-05-05 01:07:34.548332 | orchestrator | 
cinder : Restart cinder-scheduler container ----------------------------- 5.77s 2025-05-05 01:07:34.548341 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.27s 2025-05-05 01:07:34.548349 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.54s 2025-05-05 01:07:34.548358 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.89s 2025-05-05 01:07:34.548367 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.88s 2025-05-05 01:07:34.548375 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.72s 2025-05-05 01:07:34.548384 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.48s 2025-05-05 01:07:34.548393 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.47s 2025-05-05 01:07:34.548409 | orchestrator | cinder : Copying over existing policy file ------------------------------ 3.41s 2025-05-05 01:07:34.548418 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.41s 2025-05-05 01:07:34.548427 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.26s 2025-05-05 01:07:34.548436 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.09s 2025-05-05 01:07:34.548445 | orchestrator | 2025-05-05 01:07:34 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:34.548454 | orchestrator | 2025-05-05 01:07:34 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:34.548468 | orchestrator | 2025-05-05 01:07:34 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:37.597668 | orchestrator | 2025-05-05 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:37.597812 | 
orchestrator | 2025-05-05 01:07:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:37.598399 | orchestrator | 2025-05-05 01:07:37 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:37.600785 | orchestrator | 2025-05-05 01:07:37 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:37.602427 | orchestrator | 2025-05-05 01:07:37 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:37.603604 | orchestrator | 2025-05-05 01:07:37 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:40.655891 | orchestrator | 2025-05-05 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:40.656052 | orchestrator | 2025-05-05 01:07:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:40.657171 | orchestrator | 2025-05-05 01:07:40 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:40.658247 | orchestrator | 2025-05-05 01:07:40 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:40.658413 | orchestrator | 2025-05-05 01:07:40 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:40.659021 | orchestrator | 2025-05-05 01:07:40 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:43.705380 | orchestrator | 2025-05-05 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:43.705540 | orchestrator | 2025-05-05 01:07:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:43.707321 | orchestrator | 2025-05-05 01:07:43 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:43.708999 | orchestrator | 2025-05-05 01:07:43 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:43.711445 | 
orchestrator | 2025-05-05 01:07:43 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:43.713322 | orchestrator | 2025-05-05 01:07:43 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:46.762570 | orchestrator | 2025-05-05 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:46.762719 | orchestrator | 2025-05-05 01:07:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:46.763641 | orchestrator | 2025-05-05 01:07:46 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:46.765838 | orchestrator | 2025-05-05 01:07:46 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:46.767299 | orchestrator | 2025-05-05 01:07:46 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:46.769080 | orchestrator | 2025-05-05 01:07:46 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:49.819815 | orchestrator | 2025-05-05 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:49.819952 | orchestrator | 2025-05-05 01:07:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:49.821747 | orchestrator | 2025-05-05 01:07:49 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:49.823768 | orchestrator | 2025-05-05 01:07:49 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:49.826005 | orchestrator | 2025-05-05 01:07:49 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:49.827073 | orchestrator | 2025-05-05 01:07:49 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:52.871710 | orchestrator | 2025-05-05 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:52.871923 | orchestrator | 2025-05-05 
01:07:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:52.871952 | orchestrator | 2025-05-05 01:07:52 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:52.873326 | orchestrator | 2025-05-05 01:07:52 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:52.873832 | orchestrator | 2025-05-05 01:07:52 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:52.874567 | orchestrator | 2025-05-05 01:07:52 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:55.926121 | orchestrator | 2025-05-05 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:55.926308 | orchestrator | 2025-05-05 01:07:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:55.927984 | orchestrator | 2025-05-05 01:07:55 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:55.929562 | orchestrator | 2025-05-05 01:07:55 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:55.930755 | orchestrator | 2025-05-05 01:07:55 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:55.932098 | orchestrator | 2025-05-05 01:07:55 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:07:58.981865 | orchestrator | 2025-05-05 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:07:58.982085 | orchestrator | 2025-05-05 01:07:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:07:58.984502 | orchestrator | 2025-05-05 01:07:58 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:07:58.986228 | orchestrator | 2025-05-05 01:07:58 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:07:58.989477 | orchestrator | 2025-05-05 
01:07:58 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:07:58.990837 | orchestrator | 2025-05-05 01:07:58 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:02.048567 | orchestrator | 2025-05-05 01:07:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:02.048708 | orchestrator | 2025-05-05 01:08:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:02.049525 | orchestrator | 2025-05-05 01:08:02 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:02.051420 | orchestrator | 2025-05-05 01:08:02 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:02.052899 | orchestrator | 2025-05-05 01:08:02 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:02.054513 | orchestrator | 2025-05-05 01:08:02 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:02.054889 | orchestrator | 2025-05-05 01:08:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:05.096533 | orchestrator | 2025-05-05 01:08:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:05.097769 | orchestrator | 2025-05-05 01:08:05 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:05.097837 | orchestrator | 2025-05-05 01:08:05 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:05.098927 | orchestrator | 2025-05-05 01:08:05 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:05.100369 | orchestrator | 2025-05-05 01:08:05 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:08.145905 | orchestrator | 2025-05-05 01:08:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:08.146093 | orchestrator | 2025-05-05 01:08:08 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:08.148034 | orchestrator | 2025-05-05 01:08:08 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:08.150531 | orchestrator | 2025-05-05 01:08:08 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:08.153400 | orchestrator | 2025-05-05 01:08:08 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:08.155294 | orchestrator | 2025-05-05 01:08:08 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:11.209374 | orchestrator | 2025-05-05 01:08:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:11.209519 | orchestrator | 2025-05-05 01:08:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:11.209926 | orchestrator | 2025-05-05 01:08:11 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:11.211922 | orchestrator | 2025-05-05 01:08:11 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:11.212635 | orchestrator | 2025-05-05 01:08:11 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:11.213726 | orchestrator | 2025-05-05 01:08:11 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:14.257030 | orchestrator | 2025-05-05 01:08:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:14.257164 | orchestrator | 2025-05-05 01:08:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:14.258274 | orchestrator | 2025-05-05 01:08:14 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:14.258922 | orchestrator | 2025-05-05 01:08:14 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:14.261261 | orchestrator | 2025-05-05 01:08:14 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:14.262158 | orchestrator | 2025-05-05 01:08:14 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:17.302347 | orchestrator | 2025-05-05 01:08:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:17.302537 | orchestrator | 2025-05-05 01:08:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:17.302931 | orchestrator | 2025-05-05 01:08:17 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:17.304106 | orchestrator | 2025-05-05 01:08:17 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:17.305122 | orchestrator | 2025-05-05 01:08:17 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:17.306322 | orchestrator | 2025-05-05 01:08:17 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:17.306627 | orchestrator | 2025-05-05 01:08:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:20.359414 | orchestrator | 2025-05-05 01:08:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:20.359981 | orchestrator | 2025-05-05 01:08:20 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:20.360822 | orchestrator | 2025-05-05 01:08:20 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state STARTED 2025-05-05 01:08:20.361911 | orchestrator | 2025-05-05 01:08:20 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:20.362795 | orchestrator | 2025-05-05 01:08:20 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:23.416980 | orchestrator | 2025-05-05 01:08:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:23.417125 | orchestrator | 2025-05-05 01:08:23 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:23.418791 | orchestrator | 2025-05-05 01:08:23 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:23.420678 | orchestrator | 2025-05-05 01:08:23.420709 | orchestrator | 2025-05-05 01:08:23.420723 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:08:23.420738 | orchestrator | 2025-05-05 01:08:23.420753 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 01:08:23.420768 | orchestrator | Monday 05 May 2025 01:07:29 +0000 (0:00:00.349) 0:00:00.349 ************ 2025-05-05 01:08:23.420783 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:08:23.420799 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:08:23.420813 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:08:23.420827 | orchestrator | 2025-05-05 01:08:23.420841 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:08:23.420856 | orchestrator | Monday 05 May 2025 01:07:29 +0000 (0:00:00.350) 0:00:00.700 ************ 2025-05-05 01:08:23.420870 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-05 01:08:23.420884 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-05 01:08:23.420898 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-05 01:08:23.420912 | orchestrator | 2025-05-05 01:08:23.420927 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-05 01:08:23.420941 | orchestrator | 2025-05-05 01:08:23.420955 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-05 01:08:23.420969 | orchestrator | Monday 05 May 2025 01:07:30 +0000 (0:00:00.360) 0:00:01.060 ************ 2025-05-05 01:08:23.420984 | orchestrator | included: 
/ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:08:23.420999 | orchestrator | 2025-05-05 01:08:23.421013 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-05 01:08:23.421028 | orchestrator | Monday 05 May 2025 01:07:31 +0000 (0:00:00.753) 0:00:01.814 ************ 2025-05-05 01:08:23.421043 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-05 01:08:23.421084 | orchestrator | 2025-05-05 01:08:23.421114 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-05 01:08:23.421129 | orchestrator | Monday 05 May 2025 01:07:34 +0000 (0:00:03.317) 0:00:05.131 ************ 2025-05-05 01:08:23.421143 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-05 01:08:23.421158 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-05 01:08:23.421172 | orchestrator | 2025-05-05 01:08:23.421187 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-05 01:08:23.421201 | orchestrator | Monday 05 May 2025 01:07:40 +0000 (0:00:06.392) 0:00:11.524 ************ 2025-05-05 01:08:23.421282 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-05 01:08:23.421301 | orchestrator | 2025-05-05 01:08:23.421318 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-05 01:08:23.421334 | orchestrator | Monday 05 May 2025 01:07:44 +0000 (0:00:03.440) 0:00:14.965 ************ 2025-05-05 01:08:23.421350 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:08:23.421366 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-05 01:08:23.421382 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 
2025-05-05 01:08:23.421399 | orchestrator | 2025-05-05 01:08:23.421415 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-05 01:08:23.421430 | orchestrator | Monday 05 May 2025 01:07:52 +0000 (0:00:08.351) 0:00:23.316 ************ 2025-05-05 01:08:23.421446 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-05 01:08:23.421461 | orchestrator | 2025-05-05 01:08:23.421477 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-05 01:08:23.421597 | orchestrator | Monday 05 May 2025 01:07:55 +0000 (0:00:03.140) 0:00:26.456 ************ 2025-05-05 01:08:23.421615 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-05 01:08:23.421631 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-05 01:08:23.421647 | orchestrator | 2025-05-05 01:08:23.421662 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-05 01:08:23.421676 | orchestrator | Monday 05 May 2025 01:08:03 +0000 (0:00:07.422) 0:00:33.879 ************ 2025-05-05 01:08:23.421690 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-05 01:08:23.421704 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-05 01:08:23.421719 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-05 01:08:23.421733 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-05 01:08:23.421747 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-05 01:08:23.421762 | orchestrator | 2025-05-05 01:08:23.421776 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-05 01:08:23.421790 | orchestrator | Monday 05 May 2025 01:08:18 +0000 (0:00:15.350) 0:00:49.229 ************ 2025-05-05 01:08:23.421804 | orchestrator | 
included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:08:23.421819 | orchestrator | 2025-05-05 01:08:23.421833 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-05 01:08:23.421847 | orchestrator | Monday 05 May 2025 01:08:19 +0000 (0:00:00.923) 0:00:50.153 ************ 2025-05-05 01:08:23.421862 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-05-05 01:08:23.421908 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1746407300.7863169-6593-206953013547858/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1746407300.7863169-6593-206953013547858/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1746407300.7863169-6593-206953013547858/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_nova_flavor_payload_n7876dl9/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in 
\n File \"/tmp/ansible_os_nova_flavor_payload_n7876dl9/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_n7876dl9/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_n7876dl9/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-05-05 01:08:23.421941 | orchestrator | 
2025-05-05 01:08:23.421956 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:08:23.421971 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-05 01:08:23.421987 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:08:23.422002 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:08:23.422070 | orchestrator | 2025-05-05 01:08:23.422091 | orchestrator | 2025-05-05 01:08:23.422105 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:08:23.422119 | orchestrator | Monday 05 May 2025 01:08:22 +0000 (0:00:03.176) 0:00:53.330 ************ 2025-05-05 01:08:23.422133 | orchestrator | =============================================================================== 2025-05-05 01:08:23.422155 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.35s 2025-05-05 01:08:23.422183 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.35s 2025-05-05 01:08:23.423851 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.42s 2025-05-05 01:08:23.423880 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.39s 2025-05-05 01:08:23.423895 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.44s 2025-05-05 01:08:23.423909 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.32s 2025-05-05 01:08:23.423923 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.18s 2025-05-05 01:08:23.423937 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.14s 2025-05-05 01:08:23.423951 | 
orchestrator | octavia : include_tasks ------------------------------------------------- 0.92s 2025-05-05 01:08:23.423965 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.75s 2025-05-05 01:08:23.423978 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-05-05 01:08:23.423992 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-05-05 01:08:23.424014 | orchestrator | 2025-05-05 01:08:23 | INFO  | Task 37b3809c-bc2c-46c1-b9d4-eba41fc1b4c1 is in state SUCCESS 2025-05-05 01:08:23.424028 | orchestrator | 2025-05-05 01:08:23 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:23.424054 | orchestrator | 2025-05-05 01:08:23 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:26.471085 | orchestrator | 2025-05-05 01:08:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:26.471178 | orchestrator | 2025-05-05 01:08:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:26.473011 | orchestrator | 2025-05-05 01:08:26 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:26.475159 | orchestrator | 2025-05-05 01:08:26 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:26.477067 | orchestrator | 2025-05-05 01:08:26 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:29.525018 | orchestrator | 2025-05-05 01:08:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:29.525171 | orchestrator | 2025-05-05 01:08:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:29.526610 | orchestrator | 2025-05-05 01:08:29 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:29.528918 | orchestrator | 2025-05-05 01:08:29 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:29.531501 | orchestrator | 2025-05-05 01:08:29 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:32.580740 | orchestrator | 2025-05-05 01:08:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:32.580908 | orchestrator | 2025-05-05 01:08:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:32.582419 | orchestrator | 2025-05-05 01:08:32 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:32.584038 | orchestrator | 2025-05-05 01:08:32 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:32.585487 | orchestrator | 2025-05-05 01:08:32 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:35.632150 | orchestrator | 2025-05-05 01:08:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:35.632362 | orchestrator | 2025-05-05 01:08:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:35.633797 | orchestrator | 2025-05-05 01:08:35 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:35.635607 | orchestrator | 2025-05-05 01:08:35 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:35.637287 | orchestrator | 2025-05-05 01:08:35 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:38.687009 | orchestrator | 2025-05-05 01:08:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:38.687152 | orchestrator | 2025-05-05 01:08:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:38.688770 | orchestrator | 2025-05-05 01:08:38 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:38.691962 | orchestrator | 2025-05-05 01:08:38 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:38.694648 | orchestrator | 2025-05-05 01:08:38 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:41.738616 | orchestrator | 2025-05-05 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:41.738770 | orchestrator | 2025-05-05 01:08:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:41.741107 | orchestrator | 2025-05-05 01:08:41 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:41.745225 | orchestrator | 2025-05-05 01:08:41 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:41.747154 | orchestrator | 2025-05-05 01:08:41 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:44.800432 | orchestrator | 2025-05-05 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:44.800577 | orchestrator | 2025-05-05 01:08:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:44.803385 | orchestrator | 2025-05-05 01:08:44 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:44.805082 | orchestrator | 2025-05-05 01:08:44 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:44.806607 | orchestrator | 2025-05-05 01:08:44 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:47.867451 | orchestrator | 2025-05-05 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:47.867601 | orchestrator | 2025-05-05 01:08:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:47.870927 | orchestrator | 2025-05-05 01:08:47 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:47.872481 | orchestrator | 2025-05-05 01:08:47 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:47.875485 | orchestrator | 2025-05-05 01:08:47 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:50.927851 | orchestrator | 2025-05-05 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:50.927988 | orchestrator | 2025-05-05 01:08:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:50.929444 | orchestrator | 2025-05-05 01:08:50 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:50.930292 | orchestrator | 2025-05-05 01:08:50 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:50.931637 | orchestrator | 2025-05-05 01:08:50 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:50.931945 | orchestrator | 2025-05-05 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:53.996966 | orchestrator | 2025-05-05 01:08:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:53.998312 | orchestrator | 2025-05-05 01:08:53 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:54.000562 | orchestrator | 2025-05-05 01:08:54 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:54.003078 | orchestrator | 2025-05-05 01:08:54 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:08:57.059840 | orchestrator | 2025-05-05 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:08:57.059979 | orchestrator | 2025-05-05 01:08:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:08:57.060895 | orchestrator | 2025-05-05 01:08:57 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:08:57.062585 | orchestrator | 2025-05-05 01:08:57 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:08:57.064691 | orchestrator | 2025-05-05 01:08:57 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:00.121367 | orchestrator | 2025-05-05 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:00.121535 | orchestrator | 2025-05-05 01:09:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:00.121979 | orchestrator | 2025-05-05 01:09:00 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:00.122948 | orchestrator | 2025-05-05 01:09:00 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:00.123993 | orchestrator | 2025-05-05 01:09:00 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:00.124246 | orchestrator | 2025-05-05 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:03.172987 | orchestrator | 2025-05-05 01:09:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:03.175575 | orchestrator | 2025-05-05 01:09:03 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:06.225355 | orchestrator | 2025-05-05 01:09:03 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:06.225485 | orchestrator | 2025-05-05 01:09:03 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:06.225507 | orchestrator | 2025-05-05 01:09:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:06.225541 | orchestrator | 2025-05-05 01:09:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:06.226216 | orchestrator | 2025-05-05 01:09:06 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:06.227324 | orchestrator | 2025-05-05 01:09:06 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:06.228780 | orchestrator | 2025-05-05 01:09:06 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:09.281660 | orchestrator | 2025-05-05 01:09:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:09.281801 | orchestrator | 2025-05-05 01:09:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:09.283470 | orchestrator | 2025-05-05 01:09:09 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:09.286419 | orchestrator | 2025-05-05 01:09:09 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:09.288378 | orchestrator | 2025-05-05 01:09:09 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:09.288850 | orchestrator | 2025-05-05 01:09:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:12.332423 | orchestrator | 2025-05-05 01:09:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:12.334379 | orchestrator | 2025-05-05 01:09:12 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:12.335949 | orchestrator | 2025-05-05 01:09:12 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:12.335994 | orchestrator | 2025-05-05 01:09:12 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:15.381789 | orchestrator | 2025-05-05 01:09:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:15.381925 | orchestrator | 2025-05-05 01:09:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:15.382646 | orchestrator | 2025-05-05 01:09:15 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:15.384255 | orchestrator | 2025-05-05 01:09:15 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:15.385864 | orchestrator | 2025-05-05 01:09:15 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:18.431455 | orchestrator | 2025-05-05 01:09:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:18.431598 | orchestrator | 2025-05-05 01:09:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:18.432934 | orchestrator | 2025-05-05 01:09:18 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:18.434783 | orchestrator | 2025-05-05 01:09:18 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:18.436694 | orchestrator | 2025-05-05 01:09:18 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:18.436878 | orchestrator | 2025-05-05 01:09:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:21.486539 | orchestrator | 2025-05-05 01:09:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:21.487558 | orchestrator | 2025-05-05 01:09:21 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:21.488533 | orchestrator | 2025-05-05 01:09:21 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:21.489908 | orchestrator | 2025-05-05 01:09:21 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:24.545765 | orchestrator | 2025-05-05 01:09:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:24.545913 | orchestrator | 2025-05-05 01:09:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:24.547347 | orchestrator | 2025-05-05 01:09:24 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:24.549541 | orchestrator | 2025-05-05 01:09:24 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:24.551421 | orchestrator | 2025-05-05 01:09:24 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:27.603686 | orchestrator | 2025-05-05 01:09:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:27.603825 | orchestrator | 2025-05-05 01:09:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:27.605420 | orchestrator | 2025-05-05 01:09:27 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:27.607725 | orchestrator | 2025-05-05 01:09:27 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:27.610649 | orchestrator | 2025-05-05 01:09:27 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:30.660114 | orchestrator | 2025-05-05 01:09:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:30.660342 | orchestrator | 2025-05-05 01:09:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:30.661765 | orchestrator | 2025-05-05 01:09:30 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:30.663971 | orchestrator | 2025-05-05 01:09:30 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:30.666187 | orchestrator | 2025-05-05 01:09:30 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:33.709466 | orchestrator | 2025-05-05 01:09:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:33.709607 | orchestrator | 2025-05-05 01:09:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:33.711418 | orchestrator | 2025-05-05 01:09:33 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state STARTED 2025-05-05 01:09:33.713541 | orchestrator | 2025-05-05 01:09:33 | INFO  | Task 
16581183-9871-40e3-9cb3-e1c3b50baeea is in state STARTED 2025-05-05 01:09:33.716034 | orchestrator | 2025-05-05 01:09:33 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:09:36.771033 | orchestrator | 2025-05-05 01:09:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:09:36.771258 | orchestrator | 2025-05-05 01:09:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:09:36.772089 | orchestrator | 2025-05-05 01:09:36 | INFO  | Task 5dbb6385-445b-4bca-bbc4-f31d908b1a7c is in state SUCCESS 2025-05-05 01:09:36.776631 | orchestrator | 2025-05-05 01:09:36.776681 | orchestrator | 2025-05-05 01:09:36.776694 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:09:36.776706 | orchestrator | 2025-05-05 01:09:36.776718 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 01:09:36.776729 | orchestrator | Monday 05 May 2025 01:06:55 +0000 (0:00:00.524) 0:00:00.526 ************ 2025-05-05 01:09:36.776741 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:09:36.776753 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:09:36.776764 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:09:36.776775 | orchestrator | 2025-05-05 01:09:36.776787 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:09:36.776798 | orchestrator | Monday 05 May 2025 01:06:56 +0000 (0:00:00.780) 0:00:01.307 ************ 2025-05-05 01:09:36.776809 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-05 01:09:36.776821 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-05 01:09:36.776832 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-05 01:09:36.776844 | orchestrator | 2025-05-05 01:09:36.776855 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2025-05-05 01:09:36.776866 | orchestrator | 2025-05-05 01:09:36.776877 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-05 01:09:36.776935 | orchestrator | Monday 05 May 2025 01:06:57 +0000 (0:00:00.892) 0:00:02.200 ************ 2025-05-05 01:09:36.776946 | orchestrator | 2025-05-05 01:09:36.776956 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-05 01:09:36.776967 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:09:36.776997 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:09:36.777008 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:09:36.777019 | orchestrator | 2025-05-05 01:09:36.777083 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-05 01:09:36.777096 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:09:36.777108 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:09:36.777119 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-05 01:09:36.777169 | orchestrator | 2025-05-05 01:09:36.777180 | orchestrator | 2025-05-05 01:09:36.777190 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-05 01:09:36.777201 | orchestrator | Monday 05 May 2025 01:09:33 +0000 (0:02:36.316) 0:02:38.516 ************ 2025-05-05 01:09:36.777212 | orchestrator | =============================================================================== 2025-05-05 01:09:36.777269 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 156.32s 2025-05-05 01:09:36.777283 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2025-05-05 01:09:36.777295 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2025-05-05 01:09:36.777307 | orchestrator | 2025-05-05 01:09:36.777318 | orchestrator | 2025-05-05 01:09:36.777330 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:09:36.777341 | orchestrator | 2025-05-05 01:09:36.777353 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-05 01:09:36.777365 | orchestrator | Monday 05 May 2025 01:07:34 +0000 (0:00:00.292) 0:00:00.292 ************ 2025-05-05 01:09:36.777377 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:09:36.777389 | orchestrator | ok: [testbed-node-1] 2025-05-05 01:09:36.777415 | orchestrator | ok: [testbed-node-2] 2025-05-05 01:09:36.777428 | orchestrator | 2025-05-05 01:09:36.777440 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-05 01:09:36.777452 | orchestrator | Monday 05 May 2025 01:07:35 +0000 (0:00:00.373) 0:00:00.666 ************ 2025-05-05 01:09:36.777464 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-05 01:09:36.777475 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-05 01:09:36.777486 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-05 01:09:36.777496 | orchestrator | 2025-05-05 01:09:36.777507 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-05 01:09:36.777517 | orchestrator | 2025-05-05 01:09:36.777528 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-05 01:09:36.777538 | orchestrator | Monday 05 May 2025 01:07:35 +0000 (0:00:00.298) 0:00:00.964 ************ 2025-05-05 01:09:36.777549 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:09:36.777560 | 
orchestrator | 2025-05-05 01:09:36.777570 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-05 01:09:36.777580 | orchestrator | Monday 05 May 2025 01:07:36 +0000 (0:00:00.670) 0:00:01.635 ************ 2025-05-05 01:09:36.777593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 01:09:36.777628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 01:09:36.777641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 01:09:36.777652 | orchestrator | 2025-05-05 01:09:36.777665 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-05 01:09:36.777681 | orchestrator | Monday 05 May 2025 01:07:37 +0000 (0:00:00.951) 0:00:02.586 ************ 2025-05-05 01:09:36.777736 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-05 01:09:36.777749 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-05 01:09:36.777760 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 01:09:36.777772 | orchestrator | 2025-05-05 01:09:36.777782 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-05 01:09:36.777831 | orchestrator | Monday 05 May 2025 01:07:37 +0000 (0:00:00.495) 0:00:03.081 ************ 2025-05-05 01:09:36.777843 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:09:36.777853 | orchestrator | 2025-05-05 01:09:36.777864 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-05 01:09:36.777880 | orchestrator | Monday 05 May 2025 01:07:38 +0000 (0:00:00.568) 0:00:03.650 ************ 2025-05-05 01:09:36.777891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 01:09:36.777904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 01:09:36.777921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-05 01:09:36.777940 | 
orchestrator |
2025-05-05 01:09:36.777951 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-05-05 01:09:36.777961 | orchestrator | Monday 05 May 2025 01:07:39 +0000 (0:00:01.540) 0:00:05.190 ************
2025-05-05 01:09:36.777972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.777983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.777994 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.778004 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.778055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778069 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.778080 | orchestrator |
2025-05-05 01:09:36.778090 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-05-05 01:09:36.778100 | orchestrator | Monday 05 May 2025 01:07:40 +0000 (0:00:00.528) 0:00:05.719 ************
2025-05-05 01:09:36.778118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778167 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.778193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778212 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.778238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778265 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.778284 | orchestrator |
2025-05-05 01:09:36.778300 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-05-05 01:09:36.778317 | orchestrator | Monday 05 May 2025 01:07:41 +0000 (0:00:00.662) 0:00:06.381 ************
2025-05-05 01:09:36.778335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778396 | orchestrator |
2025-05-05 01:09:36.778414 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-05-05 01:09:36.778431 | orchestrator | Monday 05 May 2025 01:07:42 +0000 (0:00:01.360) 0:00:07.742 ************
2025-05-05 01:09:36.778450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http',
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.778513 | orchestrator |
2025-05-05 01:09:36.778530 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-05-05 01:09:36.778547 | orchestrator | Monday 05 May 2025 01:07:44 +0000 (0:00:01.643) 0:00:09.385 ************
2025-05-05 01:09:36.778564 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.778581 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.778601 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.778618 | orchestrator |
2025-05-05 01:09:36.778636 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-05-05 01:09:36.778654 | orchestrator | Monday 05 May 2025 01:07:44 +0000 (0:00:00.307) 0:00:09.693 ************
2025-05-05 01:09:36.778671 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-05 01:09:36.778689 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-05 01:09:36.778707 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-05 01:09:36.778725 | orchestrator |
2025-05-05 01:09:36.778743 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-05-05 01:09:36.778760 | orchestrator | Monday 05 May 2025 01:07:45 +0000 (0:00:01.445) 0:00:11.139 ************
2025-05-05 01:09:36.778778 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-05 01:09:36.778796 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-05 01:09:36.778825 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-05 01:09:36.778844 | orchestrator |
2025-05-05 01:09:36.778863 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-05-05 01:09:36.778881 | orchestrator | Monday 05 May 2025 01:07:47 +0000 (0:00:01.430) 0:00:12.569 ************
2025-05-05 01:09:36.778900 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-05 01:09:36.778912 | orchestrator |
2025-05-05 01:09:36.778923 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-05-05 01:09:36.778935 | orchestrator | Monday 05 May 2025 01:07:47 +0000 (0:00:00.430) 0:00:13.000 ************
2025-05-05 01:09:36.778946 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-05-05 01:09:36.778964 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-05-05 01:09:36.778976 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:09:36.778988 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:09:36.778999 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:09:36.779010 | orchestrator |
2025-05-05 01:09:36.779022 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-05-05 01:09:36.779033 | orchestrator | Monday 05 May 2025 01:07:48 +0000 (0:00:00.856) 0:00:13.856 ************
2025-05-05 01:09:36.779044 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.779055 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.779067 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.779078 | orchestrator |
2025-05-05 01:09:36.779089 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-05-05
01:09:36.779151 | orchestrator | Monday 05 May 2025 01:07:48 +0000 (0:00:00.415) 0:00:14.271 ************
2025-05-05 01:09:36.779164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1113985, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2444603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1113985, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2444603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1113985, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2444603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1113957, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2394602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1113957, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2394602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1113957, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2394602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1113943, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2374601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1113943, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2374601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1113943, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2374601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1113972, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2414603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1113972, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2414603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1113972, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2414603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1113835, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2044597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1113835, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2044597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1113835, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2044597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1113944, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2374601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1113944, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2374601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1113944, 'dev': 208, 'nlink':
1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2374601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1113968, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2404604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1113968, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2404604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg':
2025-05-05 01:09:36 | INFO  | Task 16581183-9871-40e3-9cb3-e1c3b50baeea is in state SUCCESS
2025-05-05 01:09:36.779493 | orchestrator | True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1113968, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2404604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1113833, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2034597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1113833, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2034597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1113833, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2034597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1113819, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.1984596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1113819, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.1984596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1113819, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.1984596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1113839, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.23446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1113839, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.23446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1113839, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.23446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1113823, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2004597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1113823, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2004597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1113823, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2004597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1113962, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2404604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1113962, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2404604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1113962, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2404604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1113935, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2364602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1113935, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2364602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1113935, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2364602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1113978, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2414603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.779796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1113978, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2414603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth':
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1113978, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2414603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1113832, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2024596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1113832, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2024596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1113832, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2024596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1113946, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2384603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1113946, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2384603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1113946, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2384603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1113820, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.1994596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1113820, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.1994596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1113820, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.1994596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1113824, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2014596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1113824, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2014596, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1113824, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2014596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1113941, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2364602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.779994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1113941, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1746404092.2364602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1113941, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2364602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1114047, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2594607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
682774, 'inode': 1114047, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2594607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1114047, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2594607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1114042, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2544606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1114042, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2544606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1114042, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2544606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1114110, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2644606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1114110, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2644606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1114110, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2644606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1114009, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2444603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780220 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1114009, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2444603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1114009, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2444603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1114123, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2654607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-05-05 01:09:36.780268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1114123, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2654607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1114123, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2654607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1114079, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2604606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1114079, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2604606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1114079, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2604606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1114084, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1746404092.2624607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1114084, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2624607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1114084, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2624607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-05 01:09:36.780375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 
'inode': 1114013, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2454603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', ...})
2025-05-05 01:09:36.780399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', ...})
2025-05-05 01:09:36.780411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1114044, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2544606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', ...})
2025-05-05 01:09:36.780446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', ...})
2025-05-05 01:09:36.780458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1114130, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2664607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', ...})
2025-05-05 01:09:36.780482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', ...})
2025-05-05 01:09:36.780494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1114098, 'dev': 208, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746404092.2634606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', ...})
2025-05-05 01:09:36.780529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', ...})
2025-05-05 01:09:36.780541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1114021, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2494605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', ...})
2025-05-05 01:09:36.780565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', ...})
2025-05-05 01:09:36.780577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1114019, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2464604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', ...})
2025-05-05 01:09:36.780612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1114029, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2494605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', ...})
2025-05-05 01:09:36.780685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', ...})
2025-05-05 01:09:36.780700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1114036, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2534604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', ...})
2025-05-05 01:09:36.780730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', ...})
2025-05-05 01:09:36.780743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1114138, 'dev': 208, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746404092.2674608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-05 01:09:36.780754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', ...})
2025-05-05 01:09:36.780766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', ...})
2025-05-05 01:09:36.780782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', ...})
2025-05-05 01:09:36.780794 | orchestrator |
2025-05-05 01:09:36.780805 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-05 01:09:36.780817 | orchestrator | Monday 05 May 2025 01:08:22 +0000 (0:00:33.133) 0:00:47.405 ************
2025-05-05 01:09:36.780829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-05 01:09:36.780845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2025-05-05 01:09:36.780857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})
2025-05-05 01:09:36.780869 | orchestrator |
2025-05-05 01:09:36.780880 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-05-05 01:09:36.780892 | orchestrator | Monday 05 May 2025 01:08:23 +0000 (0:00:01.151) 0:00:48.557 ************
2025-05-05 01:09:36.780903 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:09:36.780914 | orchestrator |
2025-05-05 01:09:36.780926 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-05-05 01:09:36.780937 | orchestrator | Monday 05 May 2025 01:08:25 +0000 (0:00:02.480) 0:00:51.037 ************
2025-05-05 01:09:36.780948 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:09:36.780960 | orchestrator |
2025-05-05 01:09:36.780971 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-05 01:09:36.780983 | orchestrator | Monday 05 May 2025 01:08:27 +0000 (0:00:02.166) 0:00:53.204 ************
2025-05-05 01:09:36.780994 | orchestrator |
2025-05-05 01:09:36.781012 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-05 01:09:36.781023 | orchestrator | Monday 05 May 2025 01:08:27 +0000 (0:00:00.056) 0:00:53.261 ************
2025-05-05 01:09:36.781035 | orchestrator |
2025-05-05 01:09:36.781046 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-05 01:09:36.781057 | orchestrator | Monday 05 May 2025 01:08:27 +0000 (0:00:00.054) 0:00:53.315 ************
2025-05-05 01:09:36.781068 | orchestrator |
2025-05-05 01:09:36.781080 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-05 01:09:36.781091 | orchestrator | Monday 05 May 2025 01:08:28 +0000 (0:00:00.182) 0:00:53.498 ************
2025-05-05 01:09:36.781102 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.781114 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.781142 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:09:36.781154 | orchestrator |
2025-05-05 01:09:36.781166 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-05 01:09:36.781177 | orchestrator | Monday 05 May 2025 01:08:29 +0000 (0:00:01.754) 0:00:55.253 ************
2025-05-05 01:09:36.781188 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.781199 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.781219 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-05 01:09:36.781232 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-05 01:09:36.781243 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-05-05 01:09:36.781254 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:09:36.781266 | orchestrator |
2025-05-05 01:09:36.781277 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-05 01:09:36.781293 | orchestrator | Monday 05 May 2025 01:09:08 +0000 (0:00:38.522) 0:01:33.776 ************
2025-05-05 01:09:36.781304 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.781316 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:09:36.781327 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:09:36.781338 | orchestrator |
2025-05-05 01:09:36.781349 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-05 01:09:36.781361 | orchestrator | Monday 05 May 2025 01:09:30 +0000 (0:00:21.593) 0:01:55.369 ************
2025-05-05 01:09:36.781372 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:09:36.781383 | orchestrator |
2025-05-05 01:09:36.781395 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-05 01:09:36.781406 | orchestrator | Monday 05 May 2025 01:09:32 +0000 (0:00:02.109) 0:01:57.479 ************
2025-05-05 01:09:36.781417 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.781428 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:09:36.781439 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:09:36.781451 | orchestrator |
2025-05-05 01:09:36.781462 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-05 01:09:36.781473 | orchestrator | Monday 05 May 2025 01:09:32 +0000 (0:00:00.399) 0:01:57.879 ************
2025-05-05 01:09:36.781486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-05 01:09:36.781500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-05 01:09:36.781512 | orchestrator |
2025-05-05 01:09:36.781523 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-05 01:09:36.781535 | orchestrator | Monday 05 May 2025 01:09:34 +0000 (0:00:02.394) 0:02:00.274 ************
2025-05-05 01:09:36.781546 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:09:36.781557 | orchestrator |
2025-05-05 01:09:36.781569 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:09:36.781580 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-05 01:09:36.781592 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-05 01:09:36.781603 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-05 01:09:36.781614 | orchestrator |
2025-05-05 01:09:36.781625 | orchestrator |
2025-05-05 01:09:36.781636 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:09:36.781647 | orchestrator | Monday 05 May 2025 01:09:35 +0000 (0:00:00.432) 0:02:00.707 ************
2025-05-05 01:09:36.781659 | orchestrator | ===============================================================================
2025-05-05 01:09:36.781676 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.52s
2025-05-05 01:09:36.781688 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.13s
2025-05-05 01:09:36.781699 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 21.59s
2025-05-05 01:09:36.781711 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.48s
2025-05-05 01:09:36.781722 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s
2025-05-05 01:09:36.781733 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.17s
2025-05-05 01:09:36.781744 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.11s
2025-05-05 01:09:36.781755 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.75s
2025-05-05 01:09:36.781767 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.64s
2025-05-05 01:09:36.781782 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.54s
2025-05-05 01:09:36.781793 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.45s
2025-05-05 01:09:36.781805 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.43s
2025-05-05 01:09:36.781816 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s
2025-05-05 01:09:36.781827 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.15s
2025-05-05 01:09:36.781839 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.95s
2025-05-05 01:09:36.781850 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.86s
2025-05-05 01:09:36.781861 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.67s
2025-05-05 01:09:36.781872 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s
2025-05-05 01:09:36.781884 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.57s
2025-05-05 01:09:36.781895 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.53s
2025-05-05 01:09:36.781910 | orchestrator | 2025-05-05 01:09:36 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:09:39.829594 | orchestrator | 2025-05-05 01:09:36 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:09:39.829739 | orchestrator | 2025-05-05 01:09:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:09:39.830370 | orchestrator | 2025-05-05 01:09:39 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:09:42.895076 | orchestrator | 2025-05-05 01:09:39 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:09:42.895250 | orchestrator | 2025-05-05 01:09:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:09:42.897704 | orchestrator | 2025-05-05 01:09:42 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:09:42.898085 | orchestrator | 2025-05-05 01:09:42 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:09:45.951513 | orchestrator | 2025-05-05 01:09:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:09:45.952840 | orchestrator | 2025-05-05 01:09:45 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:09:49.009616 | orchestrator | 2025-05-05 01:09:45 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:09:49.009747 | orchestrator | 2025-05-05 01:09:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:09:49.010629 | orchestrator | 2025-05-05 01:09:49 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 01:09:49.011071 |
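The handler "Waiting for grafana to start on first node" above is a retry-until-healthy probe: it failed three times (12, 11, 10 retries left) before the freshly restarted container answered. A minimal sketch of that retry pattern, with an injectable `check` callable standing in for the real HTTP probe (in practice presumably a request against Grafana on port 3000; the function name and parameters here are illustrative, not the role's actual implementation):

```python
import time

def wait_until_ready(check, retries=12, delay=5.0, sleep=time.sleep):
    """Poll `check()` until it returns True or the retries are exhausted.

    Mirrors the Ansible retries/until behaviour seen in the log: each
    failed attempt logs a FAILED - RETRYING line, then waits `delay`.
    """
    for attempt in range(retries, 0, -1):
        if check():
            return True
        print(f"FAILED - RETRYING: Waiting for grafana to start ({attempt} retries left)")
        sleep(delay)
    return False

# Demo: three failed probes, then success, matching the log's
# "12, 11, 10 retries left" followed by "ok".
probes = iter([False, False, False, True])
wait_until_ready(lambda: next(probes), sleep=lambda _: None)
```

The `sleep` parameter is injected only so the behaviour can be exercised without real waiting; the default reproduces the fixed-delay pacing.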
orchestrator | 2025-05-05 01:09:49 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:11:08.340441 | orchestrator | 2025-05-05 01:11:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:11:08.341929 | orchestrator | 2025-05-05 01:11:08 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED
2025-05-05 
01:11:08.342138 | orchestrator | 2025-05-05 01:11:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:11.381660 | orchestrator | 2025-05-05 01:11:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:14.426355 | orchestrator | 2025-05-05 01:11:11 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:14.426517 | orchestrator | 2025-05-05 01:11:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:14.426576 | orchestrator | 2025-05-05 01:11:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:14.426703 | orchestrator | 2025-05-05 01:11:14 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:17.494306 | orchestrator | 2025-05-05 01:11:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:17.494437 | orchestrator | 2025-05-05 01:11:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:17.496688 | orchestrator | 2025-05-05 01:11:17 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:20.539505 | orchestrator | 2025-05-05 01:11:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:20.539709 | orchestrator | 2025-05-05 01:11:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:20.540966 | orchestrator | 2025-05-05 01:11:20 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:23.595107 | orchestrator | 2025-05-05 01:11:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:23.595253 | orchestrator | 2025-05-05 01:11:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:23.596103 | orchestrator | 2025-05-05 01:11:23 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:26.649496 | orchestrator | 2025-05-05 01:11:23 | INFO  | Wait 1 second(s) 
until the next check 2025-05-05 01:11:26.649656 | orchestrator | 2025-05-05 01:11:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:26.649763 | orchestrator | 2025-05-05 01:11:26 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:29.699597 | orchestrator | 2025-05-05 01:11:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:29.699743 | orchestrator | 2025-05-05 01:11:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:29.700856 | orchestrator | 2025-05-05 01:11:29 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:32.741853 | orchestrator | 2025-05-05 01:11:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:32.742130 | orchestrator | 2025-05-05 01:11:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:32.742477 | orchestrator | 2025-05-05 01:11:32 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:35.790319 | orchestrator | 2025-05-05 01:11:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:35.790480 | orchestrator | 2025-05-05 01:11:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:35.793653 | orchestrator | 2025-05-05 01:11:35 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:38.839431 | orchestrator | 2025-05-05 01:11:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:38.839567 | orchestrator | 2025-05-05 01:11:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:38.841043 | orchestrator | 2025-05-05 01:11:38 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:41.889008 | orchestrator | 2025-05-05 01:11:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:41.889158 | orchestrator | 2025-05-05 
01:11:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:41.892247 | orchestrator | 2025-05-05 01:11:41 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:44.941926 | orchestrator | 2025-05-05 01:11:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:44.942158 | orchestrator | 2025-05-05 01:11:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:44.942244 | orchestrator | 2025-05-05 01:11:44 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:44.942268 | orchestrator | 2025-05-05 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:47.990363 | orchestrator | 2025-05-05 01:11:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:47.991910 | orchestrator | 2025-05-05 01:11:47 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:47.992534 | orchestrator | 2025-05-05 01:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:51.037595 | orchestrator | 2025-05-05 01:11:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:51.039019 | orchestrator | 2025-05-05 01:11:51 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:54.082315 | orchestrator | 2025-05-05 01:11:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:54.082466 | orchestrator | 2025-05-05 01:11:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:11:54.082541 | orchestrator | 2025-05-05 01:11:54 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:54.082564 | orchestrator | 2025-05-05 01:11:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:11:57.134410 | orchestrator | 2025-05-05 01:11:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state 
STARTED 2025-05-05 01:11:57.135470 | orchestrator | 2025-05-05 01:11:57 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:11:57.135619 | orchestrator | 2025-05-05 01:11:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:00.198648 | orchestrator | 2025-05-05 01:12:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:00.203128 | orchestrator | 2025-05-05 01:12:00 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:00.205244 | orchestrator | 2025-05-05 01:12:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:03.254345 | orchestrator | 2025-05-05 01:12:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:03.255025 | orchestrator | 2025-05-05 01:12:03 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:03.255420 | orchestrator | 2025-05-05 01:12:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:06.302304 | orchestrator | 2025-05-05 01:12:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:06.304661 | orchestrator | 2025-05-05 01:12:06 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:09.355759 | orchestrator | 2025-05-05 01:12:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:09.355894 | orchestrator | 2025-05-05 01:12:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:09.358461 | orchestrator | 2025-05-05 01:12:09 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:12.406913 | orchestrator | 2025-05-05 01:12:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:12.407110 | orchestrator | 2025-05-05 01:12:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:12.408158 | orchestrator | 2025-05-05 01:12:12 | INFO  
| Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:15.457444 | orchestrator | 2025-05-05 01:12:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:15.457586 | orchestrator | 2025-05-05 01:12:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:15.458439 | orchestrator | 2025-05-05 01:12:15 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:18.505119 | orchestrator | 2025-05-05 01:12:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:18.505271 | orchestrator | 2025-05-05 01:12:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:21.554347 | orchestrator | 2025-05-05 01:12:18 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:21.554469 | orchestrator | 2025-05-05 01:12:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:21.554506 | orchestrator | 2025-05-05 01:12:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:21.555222 | orchestrator | 2025-05-05 01:12:21 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:21.555350 | orchestrator | 2025-05-05 01:12:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:24.599174 | orchestrator | 2025-05-05 01:12:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:24.600821 | orchestrator | 2025-05-05 01:12:24 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:27.654281 | orchestrator | 2025-05-05 01:12:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:27.654450 | orchestrator | 2025-05-05 01:12:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:27.654984 | orchestrator | 2025-05-05 01:12:27 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 
01:12:30.708796 | orchestrator | 2025-05-05 01:12:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:30.708967 | orchestrator | 2025-05-05 01:12:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:30.710054 | orchestrator | 2025-05-05 01:12:30 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:33.767824 | orchestrator | 2025-05-05 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:33.768062 | orchestrator | 2025-05-05 01:12:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:33.769042 | orchestrator | 2025-05-05 01:12:33 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:36.822322 | orchestrator | 2025-05-05 01:12:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:36.822465 | orchestrator | 2025-05-05 01:12:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:36.825119 | orchestrator | 2025-05-05 01:12:36 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:39.871251 | orchestrator | 2025-05-05 01:12:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:39.871418 | orchestrator | 2025-05-05 01:12:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:39.873222 | orchestrator | 2025-05-05 01:12:39 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:42.918582 | orchestrator | 2025-05-05 01:12:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:42.918730 | orchestrator | 2025-05-05 01:12:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:42.919772 | orchestrator | 2025-05-05 01:12:42 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:42.919984 | orchestrator | 2025-05-05 01:12:42 | INFO  | Wait 1 second(s) 
until the next check 2025-05-05 01:12:45.963744 | orchestrator | 2025-05-05 01:12:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:45.964024 | orchestrator | 2025-05-05 01:12:45 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:49.004226 | orchestrator | 2025-05-05 01:12:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:49.004414 | orchestrator | 2025-05-05 01:12:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:49.004886 | orchestrator | 2025-05-05 01:12:49 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:52.055718 | orchestrator | 2025-05-05 01:12:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:52.055866 | orchestrator | 2025-05-05 01:12:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:52.057604 | orchestrator | 2025-05-05 01:12:52 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:55.097540 | orchestrator | 2025-05-05 01:12:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:55.097692 | orchestrator | 2025-05-05 01:12:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:55.098667 | orchestrator | 2025-05-05 01:12:55 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:12:58.144239 | orchestrator | 2025-05-05 01:12:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:12:58.144376 | orchestrator | 2025-05-05 01:12:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:12:58.145619 | orchestrator | 2025-05-05 01:12:58 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:01.190389 | orchestrator | 2025-05-05 01:12:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:01.190534 | orchestrator | 2025-05-05 
01:13:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:01.191512 | orchestrator | 2025-05-05 01:13:01 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:04.251021 | orchestrator | 2025-05-05 01:13:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:04.251205 | orchestrator | 2025-05-05 01:13:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:04.252816 | orchestrator | 2025-05-05 01:13:04 | INFO  | Task 66961ca5-60dc-41d4-8c8a-e018587e63e5 is in state STARTED 2025-05-05 01:13:04.254143 | orchestrator | 2025-05-05 01:13:04 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:04.254840 | orchestrator | 2025-05-05 01:13:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:07.309621 | orchestrator | 2025-05-05 01:13:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:07.316046 | orchestrator | 2025-05-05 01:13:07 | INFO  | Task 66961ca5-60dc-41d4-8c8a-e018587e63e5 is in state STARTED 2025-05-05 01:13:07.318773 | orchestrator | 2025-05-05 01:13:07 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:10.349156 | orchestrator | 2025-05-05 01:13:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:10.349276 | orchestrator | 2025-05-05 01:13:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:10.349457 | orchestrator | 2025-05-05 01:13:10 | INFO  | Task 66961ca5-60dc-41d4-8c8a-e018587e63e5 is in state STARTED 2025-05-05 01:13:10.349976 | orchestrator | 2025-05-05 01:13:10 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:10.350247 | orchestrator | 2025-05-05 01:13:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:13.372288 | orchestrator | 2025-05-05 01:13:13 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:13.372646 | orchestrator | 2025-05-05 01:13:13 | INFO  | Task 66961ca5-60dc-41d4-8c8a-e018587e63e5 is in state SUCCESS 2025-05-05 01:13:13.373491 | orchestrator | 2025-05-05 01:13:13 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:16.411545 | orchestrator | 2025-05-05 01:13:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:16.411663 | orchestrator | 2025-05-05 01:13:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:16.414486 | orchestrator | 2025-05-05 01:13:16 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:19.443525 | orchestrator | 2025-05-05 01:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:19.443655 | orchestrator | 2025-05-05 01:13:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:19.444397 | orchestrator | 2025-05-05 01:13:19 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:22.510317 | orchestrator | 2025-05-05 01:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:22.510467 | orchestrator | 2025-05-05 01:13:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:22.512215 | orchestrator | 2025-05-05 01:13:22 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:25.593321 | orchestrator | 2025-05-05 01:13:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:25.593455 | orchestrator | 2025-05-05 01:13:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:25.594968 | orchestrator | 2025-05-05 01:13:25 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:28.641790 | orchestrator | 2025-05-05 01:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 
01:13:28.641992 | orchestrator | 2025-05-05 01:13:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:28.643040 | orchestrator | 2025-05-05 01:13:28 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:31.692341 | orchestrator | 2025-05-05 01:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:31.692489 | orchestrator | 2025-05-05 01:13:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:31.694205 | orchestrator | 2025-05-05 01:13:31 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:34.749678 | orchestrator | 2025-05-05 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:34.749777 | orchestrator | 2025-05-05 01:13:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:34.750820 | orchestrator | 2025-05-05 01:13:34 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:34.750994 | orchestrator | 2025-05-05 01:13:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:37.807528 | orchestrator | 2025-05-05 01:13:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:37.809207 | orchestrator | 2025-05-05 01:13:37 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:40.860789 | orchestrator | 2025-05-05 01:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:40.860976 | orchestrator | 2025-05-05 01:13:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:40.861845 | orchestrator | 2025-05-05 01:13:40 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:40.861949 | orchestrator | 2025-05-05 01:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:43.913044 | orchestrator | 2025-05-05 01:13:43 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:43.915097 | orchestrator | 2025-05-05 01:13:43 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:46.969528 | orchestrator | 2025-05-05 01:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:46.969672 | orchestrator | 2025-05-05 01:13:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:46.971164 | orchestrator | 2025-05-05 01:13:46 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state STARTED 2025-05-05 01:13:50.019404 | orchestrator | 2025-05-05 01:13:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:50.019543 | orchestrator | 2025-05-05 01:13:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:13:50.023225 | orchestrator | 2025-05-05 01:13:50 | INFO  | Task 0a30d8d7-e79e-4220-ab0f-c15d2dd1581e is in state SUCCESS 2025-05-05 01:13:50.024932 | orchestrator | 2025-05-05 01:13:50.024983 | orchestrator | None 2025-05-05 01:13:50.024998 | orchestrator | 2025-05-05 01:13:50.025054 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-05 01:13:50.025083 | orchestrator | 2025-05-05 01:13:50.025099 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-05 01:13:50.025113 | orchestrator | Monday 05 May 2025 01:05:33 +0000 (0:00:00.330) 0:00:00.330 ************ 2025-05-05 01:13:50.025197 | orchestrator | changed: [testbed-manager] 2025-05-05 01:13:50.025227 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.025339 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:13:50.025378 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:13:50.025394 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:13:50.025408 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:13:50.025422 | orchestrator | changed: [testbed-node-5] 
2025-05-05 01:13:50.025436 | orchestrator |
2025-05-05 01:13:50.025451 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-05 01:13:50.025465 | orchestrator | Monday 05 May 2025 01:05:34 +0000 (0:00:00.920) 0:00:01.250 ************
2025-05-05 01:13:50.025479 | orchestrator | changed: [testbed-manager]
2025-05-05 01:13:50.025495 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.025510 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:13:50.025526 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:13:50.025542 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.025559 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.025575 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.025590 | orchestrator |
2025-05-05 01:13:50.025607 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-05 01:13:50.025623 | orchestrator | Monday 05 May 2025 01:05:36 +0000 (0:00:02.077) 0:00:03.328 ************
2025-05-05 01:13:50.025639 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-05 01:13:50.025657 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-05 01:13:50.025797 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-05 01:13:50.025837 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-05 01:13:50.025906 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-05 01:13:50.025922 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-05 01:13:50.025936 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-05 01:13:50.025950 | orchestrator |
2025-05-05 01:13:50.025965 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-05 01:13:50.025979 | orchestrator |
2025-05-05 01:13:50.026004 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-05 01:13:50.026098 | orchestrator | Monday 05 May 2025 01:05:37 +0000 (0:00:01.715) 0:00:05.043 ************
2025-05-05 01:13:50.026142 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 01:13:50.026158 | orchestrator |
2025-05-05 01:13:50.026172 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-05 01:13:50.026186 | orchestrator | Monday 05 May 2025 01:05:39 +0000 (0:00:01.081) 0:00:06.124 ************
2025-05-05 01:13:50.026201 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-05 01:13:50.026215 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-05 01:13:50.026229 | orchestrator |
2025-05-05 01:13:50.026244 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-05 01:13:50.026258 | orchestrator | Monday 05 May 2025 01:05:43 +0000 (0:00:04.879) 0:00:11.004 ************
2025-05-05 01:13:50.026272 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 01:13:50.026286 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-05 01:13:50.026300 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.026353 | orchestrator |
2025-05-05 01:13:50.026382 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-05 01:13:50.026417 | orchestrator | Monday 05 May 2025 01:05:48 +0000 (0:00:04.303) 0:00:15.308 ************
2025-05-05 01:13:50.026432 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.026454 | orchestrator |
2025-05-05 01:13:50.026469 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-05-05 01:13:50.026483 | orchestrator | Monday 05 May 2025 01:05:49 +0000 (0:00:00.751) 0:00:16.060 ************
2025-05-05 01:13:50.026497 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.026512 | orchestrator |
2025-05-05 01:13:50.026526 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-05 01:13:50.026540 | orchestrator | Monday 05 May 2025 01:05:50 +0000 (0:00:01.774) 0:00:17.834 ************
2025-05-05 01:13:50.026554 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.026568 | orchestrator |
2025-05-05 01:13:50.026582 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-05 01:13:50.026601 | orchestrator | Monday 05 May 2025 01:05:56 +0000 (0:00:05.821) 0:00:23.656 ************
2025-05-05 01:13:50.026615 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.026629 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.026644 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.026658 | orchestrator |
2025-05-05 01:13:50.026672 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-05 01:13:50.026686 | orchestrator | Monday 05 May 2025 01:05:58 +0000 (0:00:01.457) 0:00:25.114 ************
2025-05-05 01:13:50.026700 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:13:50.026714 | orchestrator |
2025-05-05 01:13:50.026729 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-05 01:13:50.026743 | orchestrator | Monday 05 May 2025 01:06:31 +0000 (0:00:33.053) 0:00:58.168 ************
2025-05-05 01:13:50.026758 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.026773 | orchestrator |
2025-05-05 01:13:50.026787 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-05 01:13:50.026801 | orchestrator | Monday 05 May 2025 01:06:44 +0000 (0:00:13.040) 0:01:11.208 ************
2025-05-05 01:13:50.026816 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:13:50.026830 | orchestrator |
2025-05-05 01:13:50.026882 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-05 01:13:50.026899 | orchestrator | Monday 05 May 2025 01:06:54 +0000 (0:00:10.545) 0:01:21.754 ************
2025-05-05 01:13:50.026945 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:13:50.026962 | orchestrator |
2025-05-05 01:13:50.026977 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-05 01:13:50.027065 | orchestrator | Monday 05 May 2025 01:06:56 +0000 (0:00:01.340) 0:01:23.095 ************
2025-05-05 01:13:50.027080 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.027094 | orchestrator |
2025-05-05 01:13:50.027108 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-05 01:13:50.027133 | orchestrator | Monday 05 May 2025 01:06:56 +0000 (0:00:00.911) 0:01:24.006 ************
2025-05-05 01:13:50.027148 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 01:13:50.027162 | orchestrator |
2025-05-05 01:13:50.027176 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-05 01:13:50.027191 | orchestrator | Monday 05 May 2025 01:06:58 +0000 (0:00:01.169) 0:01:25.175 ************
2025-05-05 01:13:50.027205 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:13:50.027219 | orchestrator |
2025-05-05 01:13:50.027233 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-05 01:13:50.027247 | orchestrator | Monday 05 May 2025 01:07:13 +0000 (0:00:15.436) 0:01:40.612 ************
2025-05-05 01:13:50.027261 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.027275 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.027289 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.027303 | orchestrator |
2025-05-05 01:13:50.027318 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-05 01:13:50.027331 | orchestrator |
2025-05-05 01:13:50.027345 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-05 01:13:50.027360 | orchestrator | Monday 05 May 2025 01:07:13 +0000 (0:00:00.263) 0:01:40.876 ************
2025-05-05 01:13:50.027374 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-05 01:13:50.027388 | orchestrator |
2025-05-05 01:13:50.027402 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-05 01:13:50.027416 | orchestrator | Monday 05 May 2025 01:07:14 +0000 (0:00:00.626) 0:01:41.502 ************
2025-05-05 01:13:50.027430 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.027460 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.027474 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.027499 | orchestrator |
2025-05-05 01:13:50.027514 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-05 01:13:50.027529 | orchestrator | Monday 05 May 2025 01:07:16 +0000 (0:00:02.182) 0:01:43.685 ************
2025-05-05 01:13:50.027582 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.027597 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.027612 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.027626 | orchestrator |
2025-05-05 01:13:50.027640 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-05 01:13:50.027654 | orchestrator | Monday 05 May 2025 01:07:18 +0000 (0:00:02.143) 0:01:45.829 ************
2025-05-05 01:13:50.027668 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.027683 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.027697 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.027711 | orchestrator |
2025-05-05 01:13:50.027725 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-05 01:13:50.027739 | orchestrator | Monday 05 May 2025 01:07:19 +0000 (0:00:00.458) 0:01:46.288 ************
2025-05-05 01:13:50.027753 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-05 01:13:50.027767 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.027781 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-05 01:13:50.027795 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.027809 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-05 01:13:50.027824 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-05 01:13:50.027838 | orchestrator |
2025-05-05 01:13:50.027906 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-05 01:13:50.027922 | orchestrator | Monday 05 May 2025 01:07:27 +0000 (0:00:08.228) 0:01:54.516 ************
2025-05-05 01:13:50.027936 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.027951 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.027966 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.027980 | orchestrator |
2025-05-05 01:13:50.028003 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-05 01:13:50.028023 | orchestrator | Monday 05 May 2025 01:07:27 +0000 (0:00:00.324) 0:01:54.840 ************
2025-05-05 01:13:50.028037 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-05 01:13:50.028051 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.028064 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-05 01:13:50.028076 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.028089 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-05 01:13:50.028101 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028114 | orchestrator | 2025-05-05 01:13:50.028126 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-05 01:13:50.028139 | orchestrator | Monday 05 May 2025 01:07:28 +0000 (0:00:00.903) 0:01:55.744 ************ 2025-05-05 01:13:50.028151 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028164 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028176 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.028189 | orchestrator | 2025-05-05 01:13:50.028201 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-05 01:13:50.028213 | orchestrator | Monday 05 May 2025 01:07:29 +0000 (0:00:00.450) 0:01:56.194 ************ 2025-05-05 01:13:50.028226 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028238 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028251 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.028263 | orchestrator | 2025-05-05 01:13:50.028275 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-05 01:13:50.028288 | orchestrator | Monday 05 May 2025 01:07:30 +0000 (0:00:00.933) 0:01:57.128 ************ 2025-05-05 01:13:50.028301 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028322 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028335 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.028348 | orchestrator | 2025-05-05 01:13:50.028361 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-05 01:13:50.028373 | orchestrator | Monday 05 May 2025 01:07:32 +0000 (0:00:02.247) 0:01:59.376 ************ 2025-05-05 01:13:50.028386 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028398 | orchestrator | skipping: [testbed-node-2] 
2025-05-05 01:13:50.028411 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:13:50.028424 | orchestrator | 2025-05-05 01:13:50.028437 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-05 01:13:50.028449 | orchestrator | Monday 05 May 2025 01:07:52 +0000 (0:00:19.753) 0:02:19.129 ************ 2025-05-05 01:13:50.028462 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028474 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028487 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:13:50.028499 | orchestrator | 2025-05-05 01:13:50.028512 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-05 01:13:50.028524 | orchestrator | Monday 05 May 2025 01:08:01 +0000 (0:00:09.862) 0:02:28.992 ************ 2025-05-05 01:13:50.028537 | orchestrator | ok: [testbed-node-0] 2025-05-05 01:13:50.028549 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028561 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028574 | orchestrator | 2025-05-05 01:13:50.028587 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-05 01:13:50.028599 | orchestrator | Monday 05 May 2025 01:08:03 +0000 (0:00:01.282) 0:02:30.274 ************ 2025-05-05 01:13:50.028611 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028629 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028641 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.028654 | orchestrator | 2025-05-05 01:13:50.028666 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-05 01:13:50.028679 | orchestrator | Monday 05 May 2025 01:08:13 +0000 (0:00:10.447) 0:02:40.722 ************ 2025-05-05 01:13:50.028691 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.028711 | orchestrator | skipping: [testbed-node-2] 2025-05-05 
01:13:50.028723 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028736 | orchestrator | 2025-05-05 01:13:50.028749 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-05 01:13:50.028761 | orchestrator | Monday 05 May 2025 01:08:15 +0000 (0:00:01.577) 0:02:42.300 ************ 2025-05-05 01:13:50.028774 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.028786 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.028799 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.028811 | orchestrator | 2025-05-05 01:13:50.028824 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-05 01:13:50.028836 | orchestrator | 2025-05-05 01:13:50.028866 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-05 01:13:50.028879 | orchestrator | Monday 05 May 2025 01:08:15 +0000 (0:00:00.556) 0:02:42.856 ************ 2025-05-05 01:13:50.028891 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:13:50.028905 | orchestrator | 2025-05-05 01:13:50.028918 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-05 01:13:50.028931 | orchestrator | Monday 05 May 2025 01:08:16 +0000 (0:00:00.910) 0:02:43.767 ************ 2025-05-05 01:13:50.028943 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-05 01:13:50.028956 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-05 01:13:50.028968 | orchestrator | 2025-05-05 01:13:50.028981 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-05 01:13:50.028993 | orchestrator | Monday 05 May 2025 01:08:19 +0000 (0:00:03.176) 0:02:46.944 ************ 2025-05-05 01:13:50.029006 | orchestrator | skipping: 
[testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-05 01:13:50.029020 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-05 01:13:50.029033 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-05 01:13:50.029047 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-05 01:13:50.029059 | orchestrator | 2025-05-05 01:13:50.029072 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-05 01:13:50.029085 | orchestrator | Monday 05 May 2025 01:08:26 +0000 (0:00:06.349) 0:02:53.293 ************ 2025-05-05 01:13:50.029097 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-05 01:13:50.029114 | orchestrator | 2025-05-05 01:13:50.029135 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-05 01:13:50.029155 | orchestrator | Monday 05 May 2025 01:08:29 +0000 (0:00:03.107) 0:02:56.400 ************ 2025-05-05 01:13:50.029175 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-05 01:13:50.029195 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-05 01:13:50.029214 | orchestrator | 2025-05-05 01:13:50.029232 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-05 01:13:50.029253 | orchestrator | Monday 05 May 2025 01:08:33 +0000 (0:00:03.760) 0:03:00.161 ************ 2025-05-05 01:13:50.029275 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-05 01:13:50.029295 | orchestrator | 2025-05-05 01:13:50.029332 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-05 01:13:50.029346 | orchestrator | Monday 05 May 
2025 01:08:36 +0000 (0:00:03.137) 0:03:03.299 ************ 2025-05-05 01:13:50.029359 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-05 01:13:50.029372 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-05 01:13:50.029384 | orchestrator | 2025-05-05 01:13:50.029397 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-05 01:13:50.029425 | orchestrator | Monday 05 May 2025 01:08:44 +0000 (0:00:08.525) 0:03:11.824 ************ 2025-05-05 01:13:50.029442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-05 01:13:50.029459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-05 01:13:50.029475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.029489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.029511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.029531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-05 01:13:50.029545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.029558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.029572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.029585 | orchestrator | 2025-05-05 01:13:50.029598 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-05 01:13:50.029611 | orchestrator | Monday 05 May 2025 01:08:46 +0000 (0:00:01.718) 0:03:13.543 ************ 2025-05-05 01:13:50.029624 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.029637 | orchestrator | 2025-05-05 01:13:50.029649 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-05 01:13:50.029668 | orchestrator | Monday 05 May 2025 01:08:46 +0000 (0:00:00.136) 0:03:13.680 ************ 2025-05-05 01:13:50.029681 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.029694 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.029706 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.029719 | orchestrator | 2025-05-05 01:13:50.029731 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-05 01:13:50.029744 | orchestrator | Monday 05 May 2025 01:08:47 +0000 (0:00:00.413) 0:03:14.094 ************ 2025-05-05 01:13:50.029756 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-05 01:13:50.029769 | orchestrator | 2025-05-05 01:13:50.029786 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-05 01:13:50.029799 | orchestrator | Monday 05 May 2025 01:08:47 +0000 (0:00:00.396) 0:03:14.490 ************ 2025-05-05 01:13:50.029812 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.029824 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.029836 | orchestrator | skipping: [testbed-node-2] 
2025-05-05 01:13:50.029871 | orchestrator | 2025-05-05 01:13:50.029885 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-05 01:13:50.029897 | orchestrator | Monday 05 May 2025 01:08:47 +0000 (0:00:00.264) 0:03:14.755 ************ 2025-05-05 01:13:50.029910 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:13:50.029923 | orchestrator | 2025-05-05 01:13:50.029936 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-05 01:13:50.029948 | orchestrator | Monday 05 May 2025 01:08:48 +0000 (0:00:00.781) 0:03:15.536 ************ 2025-05-05 01:13:50.029962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-05 01:13:50.029976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-05 01:13:50.030004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-05 01:13:50.030048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.030065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.030078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.030092 | orchestrator | 2025-05-05 01:13:50.030105 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-05 01:13:50.030118 | orchestrator | Monday 05 May 2025 01:08:51 +0000 (0:00:02.566) 0:03:18.102 ************ 2025-05-05 01:13:50.030131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}})
2025-05-05 01:13:50.030151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030170 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.030184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030210 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.030224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030257 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.030269 | orchestrator |
2025-05-05 01:13:50.030282 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-05-05 01:13:50.030295 | orchestrator | Monday 05 May 2025 01:08:51 +0000 (0:00:00.767) 0:03:18.870 ************
2025-05-05 01:13:50.030316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030374 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.030388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030421 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.030443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030471 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.030484 | orchestrator |
2025-05-05 01:13:50.030497 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-05-05 01:13:50.030509 | orchestrator | Monday 05 May 2025 01:08:53 +0000 (0:00:01.240) 0:03:20.111 ************
2025-05-05 01:13:50.030523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030693 | orchestrator |
2025-05-05 01:13:50.030706 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-05-05 01:13:50.030718 | orchestrator | Monday 05 May 2025 01:08:55 +0000 (0:00:02.795) 0:03:22.906 ************
2025-05-05 01:13:50.030731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.030798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.030993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031013 | orchestrator |
2025-05-05 01:13:50.031036 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-05-05 01:13:50.031060 | orchestrator | Monday 05 May 2025 01:09:02 +0000 (0:00:06.486) 0:03:29.393 ************
2025-05-05 01:13:50.031080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.031103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031130 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.031154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.031172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031194 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.031210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.031230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031252 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.031262 | orchestrator |
2025-05-05 01:13:50.031272 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-05-05 01:13:50.031283 | orchestrator | Monday 05 May 2025 01:09:03 +0000 (0:00:00.773) 0:03:30.167 ************
2025-05-05 01:13:50.031293 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.031303 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:13:50.031313 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:13:50.031324 | orchestrator |
2025-05-05 01:13:50.031334 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-05-05 01:13:50.031344 | orchestrator | Monday 05 May 2025 01:09:04 +0000 (0:00:01.798) 0:03:31.965 ************
2025-05-05 01:13:50.031359 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.031369 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.031379 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.031390 | orchestrator |
2025-05-05 01:13:50.031400 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-05-05 01:13:50.031414 | orchestrator | Monday 05 May 2025 01:09:05 +0000 (0:00:00.554) 0:03:32.520 ************
2025-05-05 01:13:50.031425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.031441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.031462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-05 01:13:50.031477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.031963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.031975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.031986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.032016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.032027 | orchestrator | 2025-05-05 01:13:50.032038 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-05 01:13:50.032049 | orchestrator | Monday 05 May 2025 01:09:07 +0000 (0:00:02.135) 0:03:34.656 ************ 2025-05-05 01:13:50.032059 | orchestrator | 2025-05-05 01:13:50.032071 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-05 01:13:50.032089 | orchestrator | Monday 05 May 2025 01:09:07 +0000 (0:00:00.240) 0:03:34.896 ************ 2025-05-05 01:13:50.032108 | orchestrator | 2025-05-05 01:13:50.032127 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-05 01:13:50.032143 | orchestrator | Monday 05 May 2025 01:09:07 +0000 (0:00:00.106) 0:03:35.003 ************ 2025-05-05 01:13:50.032161 | orchestrator | 2025-05-05 01:13:50.032374 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-05 01:13:50.032403 | orchestrator | Monday 05 May 2025 01:09:08 +0000 (0:00:00.329) 0:03:35.332 ************ 2025-05-05 01:13:50.032414 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.032425 | orchestrator | changed: [testbed-node-1] 2025-05-05 01:13:50.032435 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:13:50.032446 | orchestrator | 2025-05-05 01:13:50.032456 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-05 01:13:50.032466 | orchestrator | Monday 05 May 2025 01:09:24 +0000 (0:00:16.632) 0:03:51.964 ************ 2025-05-05 01:13:50.032477 | orchestrator | changed: [testbed-node-0] 2025-05-05 01:13:50.032487 | orchestrator | changed: [testbed-node-2] 2025-05-05 01:13:50.032497 | orchestrator 
| changed: [testbed-node-1] 2025-05-05 01:13:50.032507 | orchestrator | 2025-05-05 01:13:50.032517 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-05 01:13:50.032527 | orchestrator | 2025-05-05 01:13:50.032538 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-05 01:13:50.032548 | orchestrator | Monday 05 May 2025 01:09:35 +0000 (0:00:10.642) 0:04:02.607 ************ 2025-05-05 01:13:50.032560 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:13:50.032573 | orchestrator | 2025-05-05 01:13:50.032585 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-05 01:13:50.032597 | orchestrator | Monday 05 May 2025 01:09:36 +0000 (0:00:01.439) 0:04:04.046 ************ 2025-05-05 01:13:50.032608 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.032619 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.032631 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.032643 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.032654 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.032695 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.032708 | orchestrator | 2025-05-05 01:13:50.032788 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-05 01:13:50.032802 | orchestrator | Monday 05 May 2025 01:09:37 +0000 (0:00:00.737) 0:04:04.784 ************ 2025-05-05 01:13:50.032814 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.034220 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.034285 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.034303 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-05-05 01:13:50.034319 | orchestrator | 2025-05-05 01:13:50.034334 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-05 01:13:50.034349 | orchestrator | Monday 05 May 2025 01:09:38 +0000 (0:00:01.227) 0:04:06.011 ************ 2025-05-05 01:13:50.034364 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-05 01:13:50.034379 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-05 01:13:50.034394 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-05 01:13:50.034408 | orchestrator | 2025-05-05 01:13:50.034422 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-05 01:13:50.034436 | orchestrator | Monday 05 May 2025 01:09:39 +0000 (0:00:00.634) 0:04:06.646 ************ 2025-05-05 01:13:50.034451 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-05 01:13:50.034465 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-05 01:13:50.034479 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-05 01:13:50.034493 | orchestrator | 2025-05-05 01:13:50.034507 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-05 01:13:50.034521 | orchestrator | Monday 05 May 2025 01:09:41 +0000 (0:00:01.406) 0:04:08.052 ************ 2025-05-05 01:13:50.034535 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-05 01:13:50.034549 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.034564 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-05 01:13:50.034601 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.034617 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-05 01:13:50.034631 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.034645 | orchestrator | 2025-05-05 01:13:50.034659 | orchestrator 
| TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-05 01:13:50.034674 | orchestrator | Monday 05 May 2025 01:09:42 +0000 (0:00:01.006) 0:04:09.058 ************ 2025-05-05 01:13:50.034688 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-05 01:13:50.034702 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-05 01:13:50.034716 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.034730 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-05 01:13:50.034744 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-05 01:13:50.034758 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-05 01:13:50.034784 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-05 01:13:50.034799 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.034818 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-05 01:13:50.034832 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-05 01:13:50.034876 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.034891 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-05 01:13:50.034906 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-05 01:13:50.034920 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-05 01:13:50.034934 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-05 01:13:50.034948 | orchestrator | 2025-05-05 01:13:50.035147 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-05 
01:13:50.035172 | orchestrator | Monday 05 May 2025 01:09:43 +0000 (0:00:01.984) 0:04:11.043 ************ 2025-05-05 01:13:50.035187 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.035201 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.035216 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:13:50.035230 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.035244 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:13:50.035258 | orchestrator | changed: [testbed-node-5] 2025-05-05 01:13:50.035272 | orchestrator | 2025-05-05 01:13:50.035287 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-05 01:13:50.035301 | orchestrator | Monday 05 May 2025 01:09:45 +0000 (0:00:01.126) 0:04:12.170 ************ 2025-05-05 01:13:50.035315 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.035329 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.035343 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.035358 | orchestrator | changed: [testbed-node-5] 2025-05-05 01:13:50.035372 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:13:50.035386 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:13:50.035400 | orchestrator | 2025-05-05 01:13:50.035425 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-05 01:13:50.035453 | orchestrator | Monday 05 May 2025 01:09:46 +0000 (0:00:01.819) 0:04:13.989 ************ 2025-05-05 01:13:50.035480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.035527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.035555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}})  2025-05-05 01:13:50.035758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.035788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.035834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.035888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.035904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.035921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 
'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.035937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.036037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.036073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.036099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.036247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.036271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.036315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.036331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.036346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.036504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.036533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.036614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.036685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.036715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:13:50.036810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.036907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.036935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.036961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.036986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 
'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.037185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.037240 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.037255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.037301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.037316 | orchestrator | 2025-05-05 01:13:50.037330 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-05 01:13:50.037345 | orchestrator | Monday 05 May 2025 01:09:49 +0000 (0:00:02.519) 0:04:16.509 ************ 2025-05-05 01:13:50.037360 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-05 01:13:50.037376 | orchestrator | 2025-05-05 01:13:50.037391 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-05 01:13:50.037411 | orchestrator | Monday 05 May 2025 01:09:50 +0000 (0:00:01.512) 0:04:18.022 ************ 2025-05-05 01:13:50.037501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.037986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.038001 | orchestrator | 2025-05-05 01:13:50.038047 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-05 01:13:50.038066 | orchestrator | Monday 05 May 2025 01:09:54 +0000 (0:00:03.994) 0:04:22.017 ************ 2025-05-05 01:13:50.038080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.038117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.038209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038228 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.038242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.038255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.038268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038282 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.038305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.038412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.038432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.038445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038472 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.038485 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.038498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.038522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038548 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.038621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.038640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038654 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.038667 | orchestrator | 2025-05-05 01:13:50.038680 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-05 01:13:50.038692 | orchestrator | Monday 05 May 2025 01:09:56 +0000 (0:00:01.786) 0:04:23.803 ************ 2025-05-05 01:13:50.038705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.038719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.038732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038752 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.038835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.038874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.038888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.038921 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.038935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.038959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.038973 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.039012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.039028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.039041 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.039054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.039068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.039087 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.039100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.039114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.039137 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.039151 | orchestrator | 2025-05-05 01:13:50.039164 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-05 01:13:50.039177 | orchestrator | Monday 05 May 2025 01:09:59 +0000 (0:00:02.392) 0:04:26.196 ************ 2025-05-05 01:13:50.039190 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.039203 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.039215 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.039228 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-05 01:13:50.039241 | orchestrator | 2025-05-05 01:13:50.039254 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-05 01:13:50.039289 | orchestrator | Monday 05 May 2025 01:10:00 +0000 (0:00:01.134) 0:04:27.330 ************ 2025-05-05 01:13:50.039303 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-05 01:13:50.039316 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-05 01:13:50.039328 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-05 01:13:50.039341 | orchestrator | 2025-05-05 01:13:50.039354 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-05 01:13:50.039366 | orchestrator | Monday 05 May 2025 01:10:01 +0000 (0:00:00.831) 0:04:28.162 ************ 2025-05-05 01:13:50.039379 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-05 01:13:50.039391 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-05 01:13:50.039404 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-05 01:13:50.039416 | orchestrator | 2025-05-05 01:13:50.039429 | orchestrator | TASK [nova-cell : Extract 
nova key from file] ********************************** 2025-05-05 01:13:50.039442 | orchestrator | Monday 05 May 2025 01:10:01 +0000 (0:00:00.881) 0:04:29.044 ************ 2025-05-05 01:13:50.039457 | orchestrator | ok: [testbed-node-3] 2025-05-05 01:13:50.039472 | orchestrator | ok: [testbed-node-4] 2025-05-05 01:13:50.039486 | orchestrator | ok: [testbed-node-5] 2025-05-05 01:13:50.039500 | orchestrator | 2025-05-05 01:13:50.039514 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-05 01:13:50.039528 | orchestrator | Monday 05 May 2025 01:10:02 +0000 (0:00:01.000) 0:04:30.044 ************ 2025-05-05 01:13:50.039542 | orchestrator | ok: [testbed-node-3] 2025-05-05 01:13:50.039556 | orchestrator | ok: [testbed-node-4] 2025-05-05 01:13:50.039571 | orchestrator | ok: [testbed-node-5] 2025-05-05 01:13:50.039585 | orchestrator | 2025-05-05 01:13:50.039599 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-05 01:13:50.039624 | orchestrator | Monday 05 May 2025 01:10:03 +0000 (0:00:00.332) 0:04:30.376 ************ 2025-05-05 01:13:50.039639 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-05 01:13:50.039653 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-05 01:13:50.039668 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-05 01:13:50.039681 | orchestrator | 2025-05-05 01:13:50.039696 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-05 01:13:50.039709 | orchestrator | Monday 05 May 2025 01:10:04 +0000 (0:00:01.358) 0:04:31.735 ************ 2025-05-05 01:13:50.039724 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-05 01:13:50.039737 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-05 01:13:50.039751 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-05 
01:13:50.039765 | orchestrator | 2025-05-05 01:13:50.039779 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-05 01:13:50.039793 | orchestrator | Monday 05 May 2025 01:10:06 +0000 (0:00:01.549) 0:04:33.285 ************ 2025-05-05 01:13:50.039807 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-05 01:13:50.039820 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-05 01:13:50.039833 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-05 01:13:50.039861 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-05 01:13:50.039879 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-05 01:13:50.039892 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-05 01:13:50.039904 | orchestrator | 2025-05-05 01:13:50.039917 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-05 01:13:50.039930 | orchestrator | Monday 05 May 2025 01:10:11 +0000 (0:00:05.524) 0:04:38.809 ************ 2025-05-05 01:13:50.039942 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.039955 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.039968 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.039980 | orchestrator | 2025-05-05 01:13:50.039993 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-05 01:13:50.040006 | orchestrator | Monday 05 May 2025 01:10:12 +0000 (0:00:00.315) 0:04:39.125 ************ 2025-05-05 01:13:50.040018 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.040031 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.040043 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.040056 | orchestrator | 2025-05-05 01:13:50.040069 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 
2025-05-05 01:13:50.040081 | orchestrator | Monday 05 May 2025 01:10:12 +0000 (0:00:00.555) 0:04:39.680 ************ 2025-05-05 01:13:50.040094 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:13:50.040106 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:13:50.040119 | orchestrator | changed: [testbed-node-5] 2025-05-05 01:13:50.040131 | orchestrator | 2025-05-05 01:13:50.040144 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-05 01:13:50.040156 | orchestrator | Monday 05 May 2025 01:10:14 +0000 (0:00:01.592) 0:04:41.272 ************ 2025-05-05 01:13:50.040169 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-05 01:13:50.040187 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-05 01:13:50.040201 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-05 01:13:50.040214 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-05 01:13:50.040227 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-05 01:13:50.040246 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-05 01:13:50.040259 | orchestrator | 2025-05-05 01:13:50.040295 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-05 01:13:50.040309 | orchestrator | Monday 05 May 2025 01:10:17 +0000 (0:00:03.533) 0:04:44.806 ************ 2025-05-05 01:13:50.040322 | 
orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-05 01:13:50.040335 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-05 01:13:50.040348 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-05 01:13:50.040361 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-05 01:13:50.040373 | orchestrator | changed: [testbed-node-3] 2025-05-05 01:13:50.040386 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-05 01:13:50.040398 | orchestrator | changed: [testbed-node-4] 2025-05-05 01:13:50.040411 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-05 01:13:50.040424 | orchestrator | changed: [testbed-node-5] 2025-05-05 01:13:50.040444 | orchestrator | 2025-05-05 01:13:50.040457 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-05 01:13:50.040470 | orchestrator | Monday 05 May 2025 01:10:21 +0000 (0:00:03.495) 0:04:48.302 ************ 2025-05-05 01:13:50.040483 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.040497 | orchestrator | 2025-05-05 01:13:50.040510 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-05 01:13:50.040522 | orchestrator | Monday 05 May 2025 01:10:21 +0000 (0:00:00.122) 0:04:48.425 ************ 2025-05-05 01:13:50.040535 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.040549 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.040561 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.040574 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.040587 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.040599 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.040611 | orchestrator | 2025-05-05 01:13:50.040624 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-05 01:13:50.040636 | orchestrator | Monday 05 May 2025 
01:10:22 +0000 (0:00:00.991) 0:04:49.417 ************ 2025-05-05 01:13:50.040649 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-05 01:13:50.040661 | orchestrator | 2025-05-05 01:13:50.040673 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-05 01:13:50.040686 | orchestrator | Monday 05 May 2025 01:10:22 +0000 (0:00:00.383) 0:04:49.800 ************ 2025-05-05 01:13:50.040699 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.040712 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.040724 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.040737 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.040749 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.040762 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.040774 | orchestrator | 2025-05-05 01:13:50.040786 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-05 01:13:50.040799 | orchestrator | Monday 05 May 2025 01:10:23 +0000 (0:00:01.051) 0:04:50.851 ************ 2025-05-05 01:13:50.040812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.040832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.040924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.040942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.040972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.040986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.041021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.041058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041101 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.041130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.041218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.041278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.041381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.041460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.041511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2025-05-05 01:13:50.041595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.041781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.041800 | orchestrator | 2025-05-05 01:13:50.041817 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-05 01:13:50.041833 | orchestrator | Monday 05 May 2025 01:10:27 +0000 (0:00:04.061) 0:04:54.913 ************ 2025-05-05 01:13:50.041872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.041904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.041938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.041978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.042092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.042133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.042144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.042190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.042243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.042254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.042286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.042338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.042355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.042367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.042387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.042399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.042428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-05-05 01:13:50.042446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.042550 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.042622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042647 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.042659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.042670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.042741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.042773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-05-05 01:13:50.042784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.042795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.042806 | orchestrator |
2025-05-05 01:13:50.042816 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-05-05 01:13:50.042826 | orchestrator | Monday 05 May 2025 01:10:36 +0000 (0:00:08.457) 0:05:03.370 ************
2025-05-05 01:13:50.042836 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.042866 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.042877 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.042887 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.042897 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.042907 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.042917 | orchestrator |
2025-05-05 01:13:50.042933 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-05-05 01:13:50.042944 | orchestrator | Monday 05 May 2025 01:10:38 +0000 (0:00:01.861) 0:05:05.232 ************
2025-05-05 01:13:50.042954 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-05 01:13:50.042998 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-05 01:13:50.043009 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-05 01:13:50.043040 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-05 01:13:50.043052 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.043063 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-05 01:13:50.043073 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-05 01:13:50.043084 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.043094 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-05 01:13:50.043105 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.043115 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-05 01:13:50.043125 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-05 01:13:50.043140 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-05 01:13:50.043151 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-05 01:13:50.043161 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-05 01:13:50.043171 | orchestrator |
2025-05-05 01:13:50.043182 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-05-05 01:13:50.043192 | orchestrator | Monday 05 May 2025 01:10:43 +0000 (0:00:05.600) 0:05:10.833 ************
2025-05-05 01:13:50.043202 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.043212 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.043223 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.043233 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.043243 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.043253 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.043263 | orchestrator |
2025-05-05 01:13:50.043274 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-05-05 01:13:50.043284 | orchestrator | Monday 05 May 2025 01:10:44 +0000 (0:00:00.931) 0:05:11.765 ************
2025-05-05 01:13:50.043294 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-05 01:13:50.043305 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-05 01:13:50.043315 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043326 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-05 01:13:50.043340 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043351 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043361 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-05 01:13:50.043371 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043381 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.043398 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043408 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.043419 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-05 01:13:50.043429 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-05 01:13:50.043439 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043450 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.043460 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043470 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043480 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043490 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043501 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043511 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-05 01:13:50.043521 | orchestrator |
2025-05-05 01:13:50.043532 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-05-05 01:13:50.043542 | orchestrator | Monday 05 May 2025 01:10:51 +0000 (0:00:07.010) 0:05:18.776 ************
2025-05-05 01:13:50.043552 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 01:13:50.043580 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 01:13:50.043591 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 01:13:50.043602 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 01:13:50.043612 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 01:13:50.043623 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 01:13:50.043633 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 01:13:50.043643 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 01:13:50.043653 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 01:13:50.043663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-05 01:13:50.043674 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 01:13:50.043684 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 01:13:50.043694 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.043704 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 01:13:50.043715 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 01:13:50.043725 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.043735 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 01:13:50.043746 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.043756 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 01:13:50.043766 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 01:13:50.043782 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-05 01:13:50.043792 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 01:13:50.043802 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 01:13:50.043813 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-05 01:13:50.043823 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 01:13:50.043833 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 01:13:50.043888 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-05 01:13:50.043901 | orchestrator |
2025-05-05 01:13:50.043911 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-05-05 01:13:50.043922 | orchestrator | Monday 05 May 2025 01:11:01 +0000 (0:00:10.179) 0:05:28.956 ************
2025-05-05 01:13:50.043932 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.043942 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.043952 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.043960 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.043969 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.043978 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.043993 | orchestrator |
2025-05-05 01:13:50.044010 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-05-05 01:13:50.044025 | orchestrator | Monday 05 May 2025 01:11:02 +0000 (0:00:00.718) 0:05:29.674 ************
2025-05-05 01:13:50.044041 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.044055 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.044069 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.044085 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.044099 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.044115 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.044130 | orchestrator |
2025-05-05 01:13:50.044143 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-05-05 01:13:50.044156 | orchestrator | Monday 05 May 2025 01:11:03 +0000 (0:00:00.878) 0:05:30.553 ************
2025-05-05 01:13:50.044166 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.044174 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.044183 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.044195 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.044203 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.044212 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.044220 | orchestrator |
2025-05-05 01:13:50.044229 |
orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-05 01:13:50.044238 | orchestrator | Monday 05 May 2025 01:11:06 +0000 (0:00:03.002) 0:05:33.555 ************ 2025-05-05 01:13:50.044280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.044292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.044307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.044335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.044392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.044410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 
01:13:50.044420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.044454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044494 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.044503 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.044512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.044521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.044530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.044590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044618 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.044627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2025-05-05 01:13:50.044651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.044661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044679 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-05 01:13:50.044689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044707 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.044732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-05 01:13:50.044742 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.044751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-05 01:13:50.044760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-05 01:13:50.044778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})
2025-05-05 01:13:50.044787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.044810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.044820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.044829 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.044839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.044863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.044872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.044886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.044903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.044912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.044928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.044937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.044946 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.044955 | orchestrator |
2025-05-05 01:13:50.044964 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-05-05 01:13:50.044972 | orchestrator | Monday 05 May 2025 01:11:08 +0000 (0:00:01.742) 0:05:35.297 ************
2025-05-05 01:13:50.044981 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-05 01:13:50.044990 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-05 01:13:50.044999 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.045007 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-05 01:13:50.045016 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-05 01:13:50.045029 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.045038 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-05 01:13:50.045047 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-05 01:13:50.045055 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.045064 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-05 01:13:50.045073 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-05 01:13:50.045081 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.045090 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-05 01:13:50.045099 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-05 01:13:50.045108 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.045116 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-05 01:13:50.045125 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-05 01:13:50.045133 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.045142 | orchestrator |
2025-05-05 01:13:50.045151 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-05-05 01:13:50.045160 | orchestrator | Monday 05 May 2025 01:11:08 +0000 (0:00:00.624) 0:05:35.922 ************
2025-05-05 01:13:50.045173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.045183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.045198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.045207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.045221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.045234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.045243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.045259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.045268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-05 01:13:50.045284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.045315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.045324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.045361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.045406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.045435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.045453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.045484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-05 01:13:50.045507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-05 01:13:50.045525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-05 01:13:50.045543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-05 01:13:50.045785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'},
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-05 01:13:50.045799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-05 01:13:50.045808 | orchestrator | 2025-05-05 01:13:50.045817 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-05 01:13:50.045826 | orchestrator | Monday 05 May 2025 01:11:11 +0000 (0:00:02.906) 0:05:38.828 ************ 2025-05-05 01:13:50.045835 | orchestrator | skipping: [testbed-node-3] 2025-05-05 01:13:50.045886 | orchestrator | skipping: [testbed-node-4] 2025-05-05 01:13:50.045898 | orchestrator | skipping: [testbed-node-5] 2025-05-05 01:13:50.045907 | orchestrator | skipping: [testbed-node-0] 2025-05-05 01:13:50.045917 | orchestrator | skipping: [testbed-node-1] 2025-05-05 01:13:50.045926 | orchestrator | skipping: [testbed-node-2] 2025-05-05 01:13:50.045935 | orchestrator | 2025-05-05 01:13:50.045944 | 
orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-05 01:13:50.045953 | orchestrator | Monday 05 May 2025 01:11:12 +0000 (0:00:00.571) 0:05:39.400 ************
2025-05-05 01:13:50.045962 | orchestrator |
2025-05-05 01:13:50.045972 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-05 01:13:50.045981 | orchestrator | Monday 05 May 2025 01:11:12 +0000 (0:00:00.202) 0:05:39.602 ************
2025-05-05 01:13:50.045990 | orchestrator |
2025-05-05 01:13:50.045999 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-05 01:13:50.046008 | orchestrator | Monday 05 May 2025 01:11:12 +0000 (0:00:00.098) 0:05:39.700 ************
2025-05-05 01:13:50.046040 | orchestrator |
2025-05-05 01:13:50.046050 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-05 01:13:50.046061 | orchestrator | Monday 05 May 2025 01:11:12 +0000 (0:00:00.192) 0:05:39.893 ************
2025-05-05 01:13:50.046070 | orchestrator |
2025-05-05 01:13:50.046079 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-05 01:13:50.046088 | orchestrator | Monday 05 May 2025 01:11:12 +0000 (0:00:00.099) 0:05:39.992 ************
2025-05-05 01:13:50.046098 | orchestrator |
2025-05-05 01:13:50.046107 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-05 01:13:50.046116 | orchestrator | Monday 05 May 2025 01:11:13 +0000 (0:00:00.187) 0:05:40.180 ************
2025-05-05 01:13:50.046126 | orchestrator |
2025-05-05 01:13:50.046135 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-05 01:13:50.046144 | orchestrator | Monday 05 May 2025 01:11:13 +0000 (0:00:00.100) 0:05:40.280 ************
2025-05-05 01:13:50.046153 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:13:50.046163 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.046172 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:13:50.046181 | orchestrator |
2025-05-05 01:13:50.046189 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-05 01:13:50.046198 | orchestrator | Monday 05 May 2025 01:11:25 +0000 (0:00:11.969) 0:05:52.250 ************
2025-05-05 01:13:50.046211 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.046220 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:13:50.046229 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:13:50.046237 | orchestrator |
2025-05-05 01:13:50.046250 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-05 01:13:50.046260 | orchestrator | Monday 05 May 2025 01:11:41 +0000 (0:00:16.290) 0:06:08.541 ************
2025-05-05 01:13:50.046268 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.046277 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.046285 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.046294 | orchestrator |
2025-05-05 01:13:50.046302 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-05 01:13:50.046311 | orchestrator | Monday 05 May 2025 01:11:57 +0000 (0:00:16.120) 0:06:24.661 ************
2025-05-05 01:13:50.046319 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.046328 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.046336 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.046345 | orchestrator |
2025-05-05 01:13:50.046354 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-05 01:13:50.046362 | orchestrator | Monday 05 May 2025 01:12:22 +0000 (0:00:24.901) 0:06:49.563 ************
2025-05-05 01:13:50.046371 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.046380 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.046388 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.046396 | orchestrator |
2025-05-05 01:13:50.046405 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-05 01:13:50.046417 | orchestrator | Monday 05 May 2025 01:12:23 +0000 (0:00:01.260) 0:06:50.823 ************
2025-05-05 01:13:50.046426 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.046435 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.046443 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.046452 | orchestrator |
2025-05-05 01:13:50.046460 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-05 01:13:50.046469 | orchestrator | Monday 05 May 2025 01:12:24 +0000 (0:00:00.815) 0:06:51.638 ************
2025-05-05 01:13:50.046478 | orchestrator | changed: [testbed-node-5]
2025-05-05 01:13:50.046486 | orchestrator | changed: [testbed-node-3]
2025-05-05 01:13:50.046495 | orchestrator | changed: [testbed-node-4]
2025-05-05 01:13:50.046503 | orchestrator |
2025-05-05 01:13:50.046512 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-05 01:13:50.046520 | orchestrator | Monday 05 May 2025 01:12:44 +0000 (0:00:19.667) 0:07:11.306 ************
2025-05-05 01:13:50.046529 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.046538 | orchestrator |
2025-05-05 01:13:50.046546 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-05 01:13:50.046555 | orchestrator | Monday 05 May 2025 01:12:44 +0000 (0:00:00.129) 0:07:11.435 ************
2025-05-05 01:13:50.046564 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.046572 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.046581 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.046589 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.046598 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.046606 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-05 01:13:50.046615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-05 01:13:50.046624 | orchestrator |
2025-05-05 01:13:50.046633 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-05 01:13:50.046641 | orchestrator | Monday 05 May 2025 01:13:06 +0000 (0:00:21.749) 0:07:33.184 ************
2025-05-05 01:13:50.046650 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.046658 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.046667 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.046682 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.046695 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.046703 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.046712 | orchestrator |
2025-05-05 01:13:50.046721 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-05 01:13:50.046729 | orchestrator | Monday 05 May 2025 01:13:16 +0000 (0:00:10.530) 0:07:43.714 ************
2025-05-05 01:13:50.046738 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.046746 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.046755 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.046763 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.046772 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.046780 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-05-05 01:13:50.046789 | orchestrator |
2025-05-05 01:13:50.046797 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-05 01:13:50.046806 | orchestrator | Monday 05 May 2025 01:13:19 +0000 (0:00:03.317) 0:07:47.032 ************
2025-05-05 01:13:50.046814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-05 01:13:50.046823 | orchestrator |
2025-05-05 01:13:50.046831 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-05 01:13:50.046840 | orchestrator | Monday 05 May 2025 01:13:30 +0000 (0:00:10.476) 0:07:57.509 ************
2025-05-05 01:13:50.046861 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-05 01:13:50.046869 | orchestrator |
2025-05-05 01:13:50.046878 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-05 01:13:50.046885 | orchestrator | Monday 05 May 2025 01:13:31 +0000 (0:00:01.111) 0:07:58.621 ************
2025-05-05 01:13:50.046894 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.046902 | orchestrator |
2025-05-05 01:13:50.046910 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-05 01:13:50.046917 | orchestrator | Monday 05 May 2025 01:13:32 +0000 (0:00:01.426) 0:08:00.047 ************
2025-05-05 01:13:50.046926 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-05 01:13:50.046934 | orchestrator |
2025-05-05 01:13:50.046942 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-05 01:13:50.046950 | orchestrator | Monday 05 May 2025 01:13:41 +0000 (0:00:08.758) 0:08:08.806 ************
2025-05-05 01:13:50.046958 | orchestrator | ok: [testbed-node-3]
2025-05-05 01:13:50.046966 | orchestrator | ok: [testbed-node-4]
2025-05-05 01:13:50.046974 | orchestrator | ok: [testbed-node-5]
2025-05-05 01:13:50.046982 | orchestrator | ok: [testbed-node-0]
2025-05-05 01:13:50.046993 | orchestrator | ok: [testbed-node-1]
2025-05-05 01:13:50.047001 | orchestrator | ok: [testbed-node-2]
2025-05-05 01:13:50.047009 | orchestrator |
2025-05-05 01:13:50.047017 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-05 01:13:50.047025 | orchestrator |
2025-05-05 01:13:50.047034 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-05 01:13:50.047042 | orchestrator | Monday 05 May 2025 01:13:43 +0000 (0:00:02.224) 0:08:11.031 ************
2025-05-05 01:13:50.047050 | orchestrator | changed: [testbed-node-0]
2025-05-05 01:13:50.047058 | orchestrator | changed: [testbed-node-1]
2025-05-05 01:13:50.047066 | orchestrator | changed: [testbed-node-2]
2025-05-05 01:13:50.047074 | orchestrator |
2025-05-05 01:13:50.047082 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-05 01:13:50.047090 | orchestrator |
2025-05-05 01:13:50.047098 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-05 01:13:50.047106 | orchestrator | Monday 05 May 2025 01:13:45 +0000 (0:00:01.036) 0:08:12.067 ************
2025-05-05 01:13:50.047114 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.047122 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.047130 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.047138 | orchestrator |
2025-05-05 01:13:50.047146 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-05 01:13:50.047158 | orchestrator |
2025-05-05 01:13:50.047167 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-05 01:13:50.047175 | orchestrator | Monday 05 May 2025 01:13:45 +0000 (0:00:00.783) 0:08:12.851 ************
2025-05-05 01:13:50.047183 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-05 01:13:50.047191 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-05 01:13:50.047199 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-05 01:13:50.047207 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-05 01:13:50.047215 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-05 01:13:50.047223 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-05 01:13:50.047231 | orchestrator | skipping: [testbed-node-3]
2025-05-05 01:13:50.047239 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-05 01:13:50.047247 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-05 01:13:50.047255 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-05 01:13:50.047263 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-05 01:13:50.047271 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-05 01:13:50.047279 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-05 01:13:50.047287 | orchestrator | skipping: [testbed-node-4]
2025-05-05 01:13:50.047295 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-05 01:13:50.047303 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-05 01:13:50.047311 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-05 01:13:50.047324 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-05 01:13:50.047337 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-05 01:13:50.047351 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-05 01:13:50.047363 | orchestrator | skipping: [testbed-node-5]
2025-05-05 01:13:50.047376 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-05 01:13:50.047389 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-05 01:13:50.047402 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-05 01:13:50.047414 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-05 01:13:50.047427 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-05 01:13:50.047440 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-05 01:13:50.047453 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.047466 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-05 01:13:50.047480 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-05 01:13:50.047489 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-05 01:13:50.047497 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-05 01:13:50.047505 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-05 01:13:50.047513 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-05 01:13:50.047521 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:50.047529 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-05 01:13:50.047537 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-05 01:13:50.047545 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-05 01:13:50.047553 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-05 01:13:50.047561 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-05 01:13:50.047569 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-05 01:13:50.047582 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:50.047590 | orchestrator |
2025-05-05 01:13:50.047598 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-05 01:13:50.047606 | orchestrator |
2025-05-05 01:13:50.047618 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-05 01:13:50.047626 | orchestrator | Monday 05 May 2025 01:13:47 +0000 (0:00:01.403) 0:08:14.254 ************
2025-05-05 01:13:50.047634 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-05 01:13:50.047642 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-05 01:13:50.047650 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:50.047658 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-05 01:13:50.047671 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-05 01:13:53.065742 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:53.065950 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-05 01:13:53.065976 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-05 01:13:53.065992 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:53.066007 | orchestrator |
2025-05-05 01:13:53.066082 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-05 01:13:53.066100 | orchestrator |
2025-05-05 01:13:53.066114 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-05 01:13:53.066129 | orchestrator | Monday 05 May 2025 01:13:47 +0000 (0:00:00.595) 0:08:14.850 ************
2025-05-05 01:13:53.066143 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:53.066157 | orchestrator |
2025-05-05 01:13:53.066172 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-05 01:13:53.066186 | orchestrator |
2025-05-05 01:13:53.066200 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-05 01:13:53.066214 | orchestrator | Monday 05 May 2025 01:13:48 +0000 (0:00:00.990) 0:08:15.840 ************
2025-05-05 01:13:53.066228 | orchestrator | skipping: [testbed-node-0]
2025-05-05 01:13:53.066243 | orchestrator | skipping: [testbed-node-1]
2025-05-05 01:13:53.066257 | orchestrator | skipping: [testbed-node-2]
2025-05-05 01:13:53.066273 | orchestrator |
2025-05-05 01:13:53.066289 | orchestrator | PLAY RECAP *********************************************************************
2025-05-05 01:13:53.066305 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-05 01:13:53.066326 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-05 01:13:53.066343 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-05 01:13:53.066360 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-05 01:13:53.066376 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-05 01:13:53.066392 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-05 01:13:53.066408 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-05 01:13:53.066424 | orchestrator |
2025-05-05 01:13:53.066440 | orchestrator |
2025-05-05 01:13:53.066455 | orchestrator | TASKS RECAP ********************************************************************
2025-05-05 01:13:53.066472 | orchestrator | Monday 05 May 2025 01:13:49 +0000 (0:00:00.633) 0:08:16.474 ************
2025-05-05 01:13:53.066486 | orchestrator | ===============================================================================
2025-05-05 01:13:53.066533 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.05s
2025-05-05 01:13:53.066548 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.90s
2025-05-05 01:13:53.066562 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.75s
2025-05-05 01:13:53.066576 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.75s
2025-05-05 01:13:53.066590 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.67s
2025-05-05 01:13:53.066605 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.63s
2025-05-05 01:13:53.066618 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.29s
2025-05-05 01:13:53.066633 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.12s
2025-05-05 01:13:53.066647 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.44s
2025-05-05 01:13:53.066661 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.04s
2025-05-05 01:13:53.066675 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.97s
2025-05-05 01:13:53.066689 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.64s
2025-05-05 01:13:53.066703 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.55s
2025-05-05 01:13:53.066717 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.53s
2025-05-05 01:13:53.066731 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.48s
2025-05-05 01:13:53.066745 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.45s
2025-05-05 01:13:53.066760 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.18s 2025-05-05
01:13:53.066774 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.86s
2025-05-05 01:13:53.066788 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 8.76s
2025-05-05 01:13:53.066802 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.53s
2025-05-05 01:13:53.066880 | orchestrator | 2025-05-05 01:13:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:13:56.116552 | orchestrator | 2025-05-05 01:13:53 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:13:56.116697 | orchestrator | 2025-05-05 01:13:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:13:59.171291 | orchestrator | 2025-05-05 01:13:56 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:13:59.171433 | orchestrator | 2025-05-05 01:13:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:02.223472 | orchestrator | 2025-05-05 01:13:59 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:02.223618 | orchestrator | 2025-05-05 01:14:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:05.273431 | orchestrator | 2025-05-05 01:14:02 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:05.273576 | orchestrator | 2025-05-05 01:14:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:08.325398 | orchestrator | 2025-05-05 01:14:05 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:08.325521 | orchestrator | 2025-05-05 01:14:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:11.382468 | orchestrator | 2025-05-05 01:14:08 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:11.382611 | orchestrator | 2025-05-05 01:14:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:14.431179 | orchestrator | 2025-05-05 01:14:11 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:14.431356 | orchestrator | 2025-05-05 01:14:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:14.431786 | orchestrator | 2025-05-05 01:14:14 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:17.476063 | orchestrator | 2025-05-05 01:14:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:20.522526 | orchestrator | 2025-05-05 01:14:17 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:20.522677 | orchestrator | 2025-05-05 01:14:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:23.568316 | orchestrator | 2025-05-05 01:14:20 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:23.568471 | orchestrator | 2025-05-05 01:14:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:26.621498 | orchestrator | 2025-05-05 01:14:23 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:26.621673 | orchestrator | 2025-05-05 01:14:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:29.673454 | orchestrator | 2025-05-05 01:14:26 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:29.673593 | orchestrator | 2025-05-05 01:14:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:32.717419 | orchestrator | 2025-05-05 01:14:29 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:32.717562 | orchestrator | 2025-05-05 01:14:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:35.766120 | orchestrator | 2025-05-05 01:14:32 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:35.766278 | orchestrator | 2025-05-05 01:14:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:38.809231 | orchestrator | 2025-05-05 01:14:35 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:38.809376 | orchestrator | 2025-05-05 01:14:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:41.862510 | orchestrator | 2025-05-05 01:14:38 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:41.862650 | orchestrator | 2025-05-05 01:14:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:44.912009 | orchestrator | 2025-05-05 01:14:41 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:44.912167 | orchestrator | 2025-05-05 01:14:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:47.958487 | orchestrator | 2025-05-05 01:14:44 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:47.958693 | orchestrator | 2025-05-05 01:14:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:51.017124 | orchestrator | 2025-05-05 01:14:47 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:51.017255 | orchestrator | 2025-05-05 01:14:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:54.057679 | orchestrator | 2025-05-05 01:14:51 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:54.057841 | orchestrator | 2025-05-05 01:14:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:14:57.108388 | orchestrator | 2025-05-05 01:14:54 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:14:57.108540 | orchestrator | 2025-05-05 01:14:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:00.153217 | orchestrator | 2025-05-05 01:14:57 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:00.153392 | orchestrator | 2025-05-05 01:15:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:03.195355 | orchestrator | 2025-05-05 01:15:00 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:03.195499 | orchestrator | 2025-05-05 01:15:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:06.247101 | orchestrator | 2025-05-05 01:15:03 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:06.247242 | orchestrator | 2025-05-05 01:15:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:09.298310 | orchestrator | 2025-05-05 01:15:06 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:09.298463 | orchestrator | 2025-05-05 01:15:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:12.342228 | orchestrator | 2025-05-05 01:15:09 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:12.342376 | orchestrator | 2025-05-05 01:15:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:15.388859 | orchestrator | 2025-05-05 01:15:12 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:15.388993 | orchestrator | 2025-05-05 01:15:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:18.434596 | orchestrator | 2025-05-05 01:15:15 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:18.434749 | orchestrator | 2025-05-05 01:15:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:21.482747 | orchestrator | 2025-05-05 01:15:18 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:21.482998 | orchestrator | 2025-05-05 01:15:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:24.527429 | orchestrator | 2025-05-05 01:15:21 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:24.527574 | orchestrator | 2025-05-05 01:15:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:27.579576 | orchestrator | 2025-05-05 01:15:24 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:27.579727 | orchestrator | 2025-05-05 01:15:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:30.618709 | orchestrator | 2025-05-05 01:15:27 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:30.618910 | orchestrator | 2025-05-05 01:15:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:33.664658 | orchestrator | 2025-05-05 01:15:30 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:33.664871 | orchestrator | 2025-05-05 01:15:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:36.714309 | orchestrator | 2025-05-05 01:15:33 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:36.714487 | orchestrator | 2025-05-05 01:15:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:39.755518 | orchestrator | 2025-05-05 01:15:36 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:39.755675 | orchestrator | 2025-05-05 01:15:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:42.801505 | orchestrator | 2025-05-05 01:15:39 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:42.801648 | orchestrator | 2025-05-05 01:15:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:45.845909 | orchestrator | 2025-05-05 01:15:42 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:45.846176 | orchestrator | 2025-05-05 01:15:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:48.892460 | orchestrator | 2025-05-05 01:15:45 | INFO  | Wait 1 second(s) until the next check
2025-05-05 01:15:48.892625 | orchestrator | 2025-05-05 01:15:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
2025-05-05 01:15:51.937065 | orchestrator | 2025-05-05 01:15:48 | INFO
 | Wait 1 second(s) until the next check 2025-05-05 01:15:51.937198 | orchestrator | 2025-05-05 01:15:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:15:54.991251 | orchestrator | 2025-05-05 01:15:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:15:54.991401 | orchestrator | 2025-05-05 01:15:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:15:58.037249 | orchestrator | 2025-05-05 01:15:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:15:58.037390 | orchestrator | 2025-05-05 01:15:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:01.077656 | orchestrator | 2025-05-05 01:15:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:01.077876 | orchestrator | 2025-05-05 01:16:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:04.120933 | orchestrator | 2025-05-05 01:16:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:04.121058 | orchestrator | 2025-05-05 01:16:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:07.174558 | orchestrator | 2025-05-05 01:16:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:07.174698 | orchestrator | 2025-05-05 01:16:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:10.220348 | orchestrator | 2025-05-05 01:16:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:10.220543 | orchestrator | 2025-05-05 01:16:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:13.262969 | orchestrator | 2025-05-05 01:16:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:13.263108 | orchestrator | 2025-05-05 01:16:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:16.314377 | orchestrator | 2025-05-05 01:16:13 | INFO  | Wait 1 
second(s) until the next check 2025-05-05 01:16:16.314522 | orchestrator | 2025-05-05 01:16:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:19.361524 | orchestrator | 2025-05-05 01:16:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:19.361668 | orchestrator | 2025-05-05 01:16:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:22.411141 | orchestrator | 2025-05-05 01:16:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:22.411290 | orchestrator | 2025-05-05 01:16:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:25.460103 | orchestrator | 2025-05-05 01:16:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:25.460200 | orchestrator | 2025-05-05 01:16:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:28.511263 | orchestrator | 2025-05-05 01:16:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:28.511377 | orchestrator | 2025-05-05 01:16:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:31.555956 | orchestrator | 2025-05-05 01:16:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:31.556133 | orchestrator | 2025-05-05 01:16:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:34.598942 | orchestrator | 2025-05-05 01:16:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:34.599097 | orchestrator | 2025-05-05 01:16:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:37.652391 | orchestrator | 2025-05-05 01:16:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:37.652550 | orchestrator | 2025-05-05 01:16:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:40.708541 | orchestrator | 2025-05-05 01:16:37 | INFO  | Wait 1 second(s) until 
the next check 2025-05-05 01:16:40.708692 | orchestrator | 2025-05-05 01:16:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:43.756378 | orchestrator | 2025-05-05 01:16:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:43.756519 | orchestrator | 2025-05-05 01:16:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:46.811423 | orchestrator | 2025-05-05 01:16:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:46.811563 | orchestrator | 2025-05-05 01:16:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:49.859744 | orchestrator | 2025-05-05 01:16:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:49.859892 | orchestrator | 2025-05-05 01:16:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:52.903757 | orchestrator | 2025-05-05 01:16:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:52.903895 | orchestrator | 2025-05-05 01:16:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:55.944335 | orchestrator | 2025-05-05 01:16:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:55.944476 | orchestrator | 2025-05-05 01:16:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:16:58.995743 | orchestrator | 2025-05-05 01:16:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:16:58.995995 | orchestrator | 2025-05-05 01:16:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:02.046917 | orchestrator | 2025-05-05 01:16:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:02.047094 | orchestrator | 2025-05-05 01:17:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:05.094161 | orchestrator | 2025-05-05 01:17:02 | INFO  | Wait 1 second(s) until the next check 
2025-05-05 01:17:05.094332 | orchestrator | 2025-05-05 01:17:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:08.140753 | orchestrator | 2025-05-05 01:17:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:08.141012 | orchestrator | 2025-05-05 01:17:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:11.186881 | orchestrator | 2025-05-05 01:17:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:11.187063 | orchestrator | 2025-05-05 01:17:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:14.234480 | orchestrator | 2025-05-05 01:17:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:14.234655 | orchestrator | 2025-05-05 01:17:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:17.280965 | orchestrator | 2025-05-05 01:17:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:17.281192 | orchestrator | 2025-05-05 01:17:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:20.329297 | orchestrator | 2025-05-05 01:17:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:20.329474 | orchestrator | 2025-05-05 01:17:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:23.391829 | orchestrator | 2025-05-05 01:17:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:23.392010 | orchestrator | 2025-05-05 01:17:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:26.432217 | orchestrator | 2025-05-05 01:17:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:26.432383 | orchestrator | 2025-05-05 01:17:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:29.487963 | orchestrator | 2025-05-05 01:17:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 
01:17:29.488143 | orchestrator | 2025-05-05 01:17:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:32.529541 | orchestrator | 2025-05-05 01:17:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:32.529716 | orchestrator | 2025-05-05 01:17:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:35.580331 | orchestrator | 2025-05-05 01:17:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:35.580477 | orchestrator | 2025-05-05 01:17:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:38.633481 | orchestrator | 2025-05-05 01:17:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:38.633626 | orchestrator | 2025-05-05 01:17:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:41.683601 | orchestrator | 2025-05-05 01:17:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:41.683752 | orchestrator | 2025-05-05 01:17:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:44.729517 | orchestrator | 2025-05-05 01:17:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:44.729661 | orchestrator | 2025-05-05 01:17:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:47.774547 | orchestrator | 2025-05-05 01:17:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:47.774695 | orchestrator | 2025-05-05 01:17:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:50.817416 | orchestrator | 2025-05-05 01:17:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:50.817553 | orchestrator | 2025-05-05 01:17:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:53.871404 | orchestrator | 2025-05-05 01:17:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:53.871550 
| orchestrator | 2025-05-05 01:17:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:56.919933 | orchestrator | 2025-05-05 01:17:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:56.920098 | orchestrator | 2025-05-05 01:17:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:17:59.963065 | orchestrator | 2025-05-05 01:17:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:17:59.963213 | orchestrator | 2025-05-05 01:17:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:03.007421 | orchestrator | 2025-05-05 01:17:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:03.007597 | orchestrator | 2025-05-05 01:18:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:06.057675 | orchestrator | 2025-05-05 01:18:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:06.057894 | orchestrator | 2025-05-05 01:18:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:09.107265 | orchestrator | 2025-05-05 01:18:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:09.107394 | orchestrator | 2025-05-05 01:18:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:12.156684 | orchestrator | 2025-05-05 01:18:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:12.156881 | orchestrator | 2025-05-05 01:18:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:15.201374 | orchestrator | 2025-05-05 01:18:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:15.201515 | orchestrator | 2025-05-05 01:18:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:18.250128 | orchestrator | 2025-05-05 01:18:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:18.250274 | orchestrator 
| 2025-05-05 01:18:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:21.293609 | orchestrator | 2025-05-05 01:18:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:21.293825 | orchestrator | 2025-05-05 01:18:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:24.343356 | orchestrator | 2025-05-05 01:18:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:24.343514 | orchestrator | 2025-05-05 01:18:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:27.382671 | orchestrator | 2025-05-05 01:18:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:27.382874 | orchestrator | 2025-05-05 01:18:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:30.432976 | orchestrator | 2025-05-05 01:18:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:30.433151 | orchestrator | 2025-05-05 01:18:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:33.479194 | orchestrator | 2025-05-05 01:18:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:33.479361 | orchestrator | 2025-05-05 01:18:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:36.523362 | orchestrator | 2025-05-05 01:18:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:36.523488 | orchestrator | 2025-05-05 01:18:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:39.572324 | orchestrator | 2025-05-05 01:18:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:39.572469 | orchestrator | 2025-05-05 01:18:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:42.620403 | orchestrator | 2025-05-05 01:18:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:42.620571 | orchestrator | 2025-05-05 
01:18:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:45.668855 | orchestrator | 2025-05-05 01:18:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:45.669001 | orchestrator | 2025-05-05 01:18:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:48.717476 | orchestrator | 2025-05-05 01:18:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:48.717657 | orchestrator | 2025-05-05 01:18:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:51.765521 | orchestrator | 2025-05-05 01:18:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:51.765668 | orchestrator | 2025-05-05 01:18:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:54.809709 | orchestrator | 2025-05-05 01:18:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:54.809921 | orchestrator | 2025-05-05 01:18:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:18:57.859406 | orchestrator | 2025-05-05 01:18:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:18:57.859544 | orchestrator | 2025-05-05 01:18:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:00.909265 | orchestrator | 2025-05-05 01:18:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:00.909411 | orchestrator | 2025-05-05 01:19:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:03.955699 | orchestrator | 2025-05-05 01:19:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:03.955919 | orchestrator | 2025-05-05 01:19:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:07.012866 | orchestrator | 2025-05-05 01:19:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:07.013012 | orchestrator | 2025-05-05 01:19:07 | INFO 
 | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:10.061286 | orchestrator | 2025-05-05 01:19:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:10.061425 | orchestrator | 2025-05-05 01:19:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:13.108699 | orchestrator | 2025-05-05 01:19:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:13.108887 | orchestrator | 2025-05-05 01:19:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:16.156374 | orchestrator | 2025-05-05 01:19:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:16.156517 | orchestrator | 2025-05-05 01:19:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:19.203372 | orchestrator | 2025-05-05 01:19:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:19.203517 | orchestrator | 2025-05-05 01:19:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:22.251201 | orchestrator | 2025-05-05 01:19:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:22.251338 | orchestrator | 2025-05-05 01:19:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:25.301141 | orchestrator | 2025-05-05 01:19:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:25.301317 | orchestrator | 2025-05-05 01:19:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:28.351688 | orchestrator | 2025-05-05 01:19:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:28.351870 | orchestrator | 2025-05-05 01:19:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:31.399261 | orchestrator | 2025-05-05 01:19:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:31.399407 | orchestrator | 2025-05-05 01:19:31 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:34.446499 | orchestrator | 2025-05-05 01:19:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:34.446672 | orchestrator | 2025-05-05 01:19:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:37.489104 | orchestrator | 2025-05-05 01:19:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:37.489270 | orchestrator | 2025-05-05 01:19:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:40.532291 | orchestrator | 2025-05-05 01:19:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:40.532462 | orchestrator | 2025-05-05 01:19:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:43.582580 | orchestrator | 2025-05-05 01:19:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:43.582736 | orchestrator | 2025-05-05 01:19:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:46.631521 | orchestrator | 2025-05-05 01:19:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:46.631655 | orchestrator | 2025-05-05 01:19:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:49.681354 | orchestrator | 2025-05-05 01:19:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:49.681520 | orchestrator | 2025-05-05 01:19:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:52.732303 | orchestrator | 2025-05-05 01:19:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:52.732439 | orchestrator | 2025-05-05 01:19:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:52.732801 | orchestrator | 2025-05-05 01:19:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:55.776724 | orchestrator | 2025-05-05 01:19:55 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:19:58.826604 | orchestrator | 2025-05-05 01:19:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:19:58.826769 | orchestrator | 2025-05-05 01:19:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:01.868523 | orchestrator | 2025-05-05 01:19:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:01.868677 | orchestrator | 2025-05-05 01:20:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:04.915148 | orchestrator | 2025-05-05 01:20:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:04.915313 | orchestrator | 2025-05-05 01:20:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:07.966643 | orchestrator | 2025-05-05 01:20:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:07.966822 | orchestrator | 2025-05-05 01:20:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:07.967373 | orchestrator | 2025-05-05 01:20:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:11.024153 | orchestrator | 2025-05-05 01:20:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:14.063087 | orchestrator | 2025-05-05 01:20:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:14.063267 | orchestrator | 2025-05-05 01:20:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:17.107323 | orchestrator | 2025-05-05 01:20:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:17.107526 | orchestrator | 2025-05-05 01:20:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:20.158530 | orchestrator | 2025-05-05 01:20:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:20.158732 | orchestrator | 2025-05-05 01:20:20 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:23.209934 | orchestrator | 2025-05-05 01:20:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:23.210201 | orchestrator | 2025-05-05 01:20:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:26.254549 | orchestrator | 2025-05-05 01:20:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:26.254729 | orchestrator | 2025-05-05 01:20:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:29.304667 | orchestrator | 2025-05-05 01:20:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:29.304834 | orchestrator | 2025-05-05 01:20:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:32.351936 | orchestrator | 2025-05-05 01:20:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:32.352139 | orchestrator | 2025-05-05 01:20:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:35.399531 | orchestrator | 2025-05-05 01:20:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:35.399711 | orchestrator | 2025-05-05 01:20:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:38.449691 | orchestrator | 2025-05-05 01:20:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:38.449932 | orchestrator | 2025-05-05 01:20:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:41.498379 | orchestrator | 2025-05-05 01:20:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:41.498551 | orchestrator | 2025-05-05 01:20:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:44.549397 | orchestrator | 2025-05-05 01:20:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:44.549577 | orchestrator | 2025-05-05 01:20:44 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:47.595270 | orchestrator | 2025-05-05 01:20:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:47.595421 | orchestrator | 2025-05-05 01:20:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:50.649220 | orchestrator | 2025-05-05 01:20:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:50.650228 | orchestrator | 2025-05-05 01:20:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:53.712853 | orchestrator | 2025-05-05 01:20:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:53.713055 | orchestrator | 2025-05-05 01:20:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:56.752442 | orchestrator | 2025-05-05 01:20:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:56.752988 | orchestrator | 2025-05-05 01:20:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:20:59.799359 | orchestrator | 2025-05-05 01:20:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:20:59.799485 | orchestrator | 2025-05-05 01:20:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:21:02.839663 | orchestrator | 2025-05-05 01:20:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:21:02.839802 | orchestrator | 2025-05-05 01:21:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:21:05.889021 | orchestrator | 2025-05-05 01:21:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:21:05.889172 | orchestrator | 2025-05-05 01:21:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:21:08.949131 | orchestrator | 2025-05-05 01:21:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:21:08.949273 | orchestrator | 2025-05-05 01:21:08 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:21:12.001834 | orchestrator | 2025-05-05 01:21:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:21:12.002088 | orchestrator | 2025-05-05 01:21:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:23:01.761655 | orchestrator | 2025-05-05 01:23:01 | INFO  | Task 6deb4f54-59e3-4a2a-93c9-cf4caa66cdc6 is in state STARTED 2025-05-05 01:23:13.972591 | orchestrator | 2025-05-05 01:23:13 | INFO  | Task 6deb4f54-59e3-4a2a-93c9-cf4caa66cdc6 is in state SUCCESS 2025-05-05 01:29:35.193956 | orchestrator | 2025-05-05 01:29:35 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:38.241407 | orchestrator | 2025-05-05 01:29:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:38.241558 | orchestrator | 2025-05-05 01:29:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:41.288203 | orchestrator | 2025-05-05 01:29:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:41.288384 | orchestrator | 2025-05-05 01:29:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:44.350661 | orchestrator | 2025-05-05 01:29:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:44.350826 | orchestrator | 2025-05-05 01:29:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:47.410857 | orchestrator | 2025-05-05 01:29:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:47.411001 | orchestrator | 2025-05-05 01:29:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:50.458883 | orchestrator | 2025-05-05 01:29:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:50.459030 | orchestrator | 2025-05-05 01:29:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:53.510106 | orchestrator | 2025-05-05 01:29:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:53.510354 | orchestrator | 2025-05-05 01:29:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:56.573694 | orchestrator | 2025-05-05 01:29:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:56.573841 | orchestrator | 2025-05-05 01:29:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:29:59.628735 | orchestrator | 2025-05-05 01:29:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:29:59.628880 | orchestrator | 2025-05-05 01:29:59 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:02.676489 | orchestrator | 2025-05-05 01:29:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:02.676654 | orchestrator | 2025-05-05 01:30:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:05.726553 | orchestrator | 2025-05-05 01:30:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:05.726703 | orchestrator | 2025-05-05 01:30:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:08.784735 | orchestrator | 2025-05-05 01:30:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:08.784879 | orchestrator | 2025-05-05 01:30:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:11.826363 | orchestrator | 2025-05-05 01:30:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:11.826503 | orchestrator | 2025-05-05 01:30:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:14.876564 | orchestrator | 2025-05-05 01:30:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:14.876710 | orchestrator | 2025-05-05 01:30:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:17.927884 | orchestrator | 2025-05-05 01:30:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:17.928028 | orchestrator | 2025-05-05 01:30:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:20.985899 | orchestrator | 2025-05-05 01:30:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:20.986133 | orchestrator | 2025-05-05 01:30:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:24.036252 | orchestrator | 2025-05-05 01:30:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:24.036442 | orchestrator | 2025-05-05 01:30:24 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:27.095057 | orchestrator | 2025-05-05 01:30:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:27.095198 | orchestrator | 2025-05-05 01:30:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:30.146702 | orchestrator | 2025-05-05 01:30:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:30.146845 | orchestrator | 2025-05-05 01:30:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:33.191593 | orchestrator | 2025-05-05 01:30:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:33.191738 | orchestrator | 2025-05-05 01:30:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:36.239574 | orchestrator | 2025-05-05 01:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:36.239704 | orchestrator | 2025-05-05 01:30:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:39.292844 | orchestrator | 2025-05-05 01:30:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:39.293028 | orchestrator | 2025-05-05 01:30:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:42.349713 | orchestrator | 2025-05-05 01:30:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:42.349854 | orchestrator | 2025-05-05 01:30:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:45.400675 | orchestrator | 2025-05-05 01:30:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:45.400809 | orchestrator | 2025-05-05 01:30:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:48.449922 | orchestrator | 2025-05-05 01:30:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:48.450121 | orchestrator | 2025-05-05 01:30:48 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:51.501170 | orchestrator | 2025-05-05 01:30:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:51.501379 | orchestrator | 2025-05-05 01:30:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:54.561176 | orchestrator | 2025-05-05 01:30:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:54.561368 | orchestrator | 2025-05-05 01:30:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:30:57.606913 | orchestrator | 2025-05-05 01:30:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:30:57.607055 | orchestrator | 2025-05-05 01:30:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:00.655444 | orchestrator | 2025-05-05 01:30:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:00.655581 | orchestrator | 2025-05-05 01:31:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:03.702450 | orchestrator | 2025-05-05 01:31:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:03.702641 | orchestrator | 2025-05-05 01:31:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:06.754636 | orchestrator | 2025-05-05 01:31:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:06.754773 | orchestrator | 2025-05-05 01:31:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:09.814494 | orchestrator | 2025-05-05 01:31:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:09.814653 | orchestrator | 2025-05-05 01:31:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:12.861159 | orchestrator | 2025-05-05 01:31:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:12.861341 | orchestrator | 2025-05-05 01:31:12 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:15.908855 | orchestrator | 2025-05-05 01:31:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:15.909000 | orchestrator | 2025-05-05 01:31:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:18.959022 | orchestrator | 2025-05-05 01:31:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:18.959161 | orchestrator | 2025-05-05 01:31:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:22.008693 | orchestrator | 2025-05-05 01:31:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:22.008843 | orchestrator | 2025-05-05 01:31:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:25.059860 | orchestrator | 2025-05-05 01:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:25.060000 | orchestrator | 2025-05-05 01:31:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:28.108877 | orchestrator | 2025-05-05 01:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:28.109020 | orchestrator | 2025-05-05 01:31:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:31.155619 | orchestrator | 2025-05-05 01:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:31.155767 | orchestrator | 2025-05-05 01:31:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:34.206841 | orchestrator | 2025-05-05 01:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:34.206987 | orchestrator | 2025-05-05 01:31:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:37.261805 | orchestrator | 2025-05-05 01:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:37.261948 | orchestrator | 2025-05-05 01:31:37 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:40.314822 | orchestrator | 2025-05-05 01:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:40.314937 | orchestrator | 2025-05-05 01:31:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:43.365895 | orchestrator | 2025-05-05 01:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:43.366100 | orchestrator | 2025-05-05 01:31:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:46.407920 | orchestrator | 2025-05-05 01:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:46.408085 | orchestrator | 2025-05-05 01:31:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:49.454615 | orchestrator | 2025-05-05 01:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:49.454744 | orchestrator | 2025-05-05 01:31:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:52.500562 | orchestrator | 2025-05-05 01:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:52.500729 | orchestrator | 2025-05-05 01:31:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:55.549600 | orchestrator | 2025-05-05 01:31:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:55.549742 | orchestrator | 2025-05-05 01:31:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:31:58.594475 | orchestrator | 2025-05-05 01:31:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:31:58.594614 | orchestrator | 2025-05-05 01:31:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:01.638716 | orchestrator | 2025-05-05 01:31:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:01.638884 | orchestrator | 2025-05-05 01:32:01 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:04.691608 | orchestrator | 2025-05-05 01:32:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:04.691767 | orchestrator | 2025-05-05 01:32:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:07.741793 | orchestrator | 2025-05-05 01:32:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:07.741933 | orchestrator | 2025-05-05 01:32:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:10.794437 | orchestrator | 2025-05-05 01:32:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:10.794585 | orchestrator | 2025-05-05 01:32:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:13.843931 | orchestrator | 2025-05-05 01:32:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:13.844077 | orchestrator | 2025-05-05 01:32:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:16.891648 | orchestrator | 2025-05-05 01:32:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:16.891773 | orchestrator | 2025-05-05 01:32:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:19.941149 | orchestrator | 2025-05-05 01:32:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:19.941285 | orchestrator | 2025-05-05 01:32:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:22.982552 | orchestrator | 2025-05-05 01:32:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:22.982703 | orchestrator | 2025-05-05 01:32:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:26.038075 | orchestrator | 2025-05-05 01:32:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:26.038233 | orchestrator | 2025-05-05 01:32:26 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:29.086235 | orchestrator | 2025-05-05 01:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:29.086391 | orchestrator | 2025-05-05 01:32:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:32.142115 | orchestrator | 2025-05-05 01:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:32.142499 | orchestrator | 2025-05-05 01:32:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:35.194423 | orchestrator | 2025-05-05 01:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:35.194573 | orchestrator | 2025-05-05 01:32:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:38.238943 | orchestrator | 2025-05-05 01:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:38.239093 | orchestrator | 2025-05-05 01:32:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:41.287849 | orchestrator | 2025-05-05 01:32:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:41.288001 | orchestrator | 2025-05-05 01:32:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:44.344386 | orchestrator | 2025-05-05 01:32:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:44.344531 | orchestrator | 2025-05-05 01:32:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:47.389012 | orchestrator | 2025-05-05 01:32:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:47.389163 | orchestrator | 2025-05-05 01:32:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:50.434709 | orchestrator | 2025-05-05 01:32:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:50.434839 | orchestrator | 2025-05-05 01:32:50 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:53.482258 | orchestrator | 2025-05-05 01:32:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:53.482479 | orchestrator | 2025-05-05 01:32:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:56.533384 | orchestrator | 2025-05-05 01:32:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:56.533538 | orchestrator | 2025-05-05 01:32:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:32:59.580283 | orchestrator | 2025-05-05 01:32:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:32:59.580478 | orchestrator | 2025-05-05 01:32:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:02.630125 | orchestrator | 2025-05-05 01:32:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:02.630263 | orchestrator | 2025-05-05 01:33:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:02.632321 | orchestrator | 2025-05-05 01:33:02 | INFO  | Task 340d1059-5dcd-4031-b15d-e783724a2133 is in state STARTED 2025-05-05 01:33:02.632516 | orchestrator | 2025-05-05 01:33:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:05.694306 | orchestrator | 2025-05-05 01:33:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:05.694900 | orchestrator | 2025-05-05 01:33:05 | INFO  | Task 340d1059-5dcd-4031-b15d-e783724a2133 is in state STARTED 2025-05-05 01:33:08.751189 | orchestrator | 2025-05-05 01:33:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:08.751365 | orchestrator | 2025-05-05 01:33:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:08.752283 | orchestrator | 2025-05-05 01:33:08 | INFO  | Task 340d1059-5dcd-4031-b15d-e783724a2133 is in state STARTED 2025-05-05 01:33:11.807426 | orchestrator | 
2025-05-05 01:33:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:11.807572 | orchestrator | 2025-05-05 01:33:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:11.810295 | orchestrator | 2025-05-05 01:33:11 | INFO  | Task 340d1059-5dcd-4031-b15d-e783724a2133 is in state STARTED 2025-05-05 01:33:14.862102 | orchestrator | 2025-05-05 01:33:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:14.862248 | orchestrator | 2025-05-05 01:33:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:14.862596 | orchestrator | 2025-05-05 01:33:14 | INFO  | Task 340d1059-5dcd-4031-b15d-e783724a2133 is in state SUCCESS 2025-05-05 01:33:17.913306 | orchestrator | 2025-05-05 01:33:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:17.913505 | orchestrator | 2025-05-05 01:33:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:20.960049 | orchestrator | 2025-05-05 01:33:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:20.960196 | orchestrator | 2025-05-05 01:33:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:24.007415 | orchestrator | 2025-05-05 01:33:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:24.007623 | orchestrator | 2025-05-05 01:33:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:27.058268 | orchestrator | 2025-05-05 01:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:27.058489 | orchestrator | 2025-05-05 01:33:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:30.110476 | orchestrator | 2025-05-05 01:33:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:30.110650 | orchestrator | 2025-05-05 01:33:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:33.160275 | 
orchestrator | 2025-05-05 01:33:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:33.160479 | orchestrator | 2025-05-05 01:33:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:36.216679 | orchestrator | 2025-05-05 01:33:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:36.216860 | orchestrator | 2025-05-05 01:33:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:39.265199 | orchestrator | 2025-05-05 01:33:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:39.265401 | orchestrator | 2025-05-05 01:33:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:42.314564 | orchestrator | 2025-05-05 01:33:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:42.314747 | orchestrator | 2025-05-05 01:33:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:45.368797 | orchestrator | 2025-05-05 01:33:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:45.368938 | orchestrator | 2025-05-05 01:33:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:48.414271 | orchestrator | 2025-05-05 01:33:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:48.414471 | orchestrator | 2025-05-05 01:33:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:51.455828 | orchestrator | 2025-05-05 01:33:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:51.455951 | orchestrator | 2025-05-05 01:33:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:54.498633 | orchestrator | 2025-05-05 01:33:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:54.498782 | orchestrator | 2025-05-05 01:33:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:33:57.537432 | orchestrator | 
2025-05-05 01:33:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:33:57.537575 | orchestrator | 2025-05-05 01:33:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:00.585392 | orchestrator | 2025-05-05 01:33:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:00.585548 | orchestrator | 2025-05-05 01:34:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:03.632994 | orchestrator | 2025-05-05 01:34:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:03.633171 | orchestrator | 2025-05-05 01:34:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:06.688603 | orchestrator | 2025-05-05 01:34:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:06.688749 | orchestrator | 2025-05-05 01:34:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:09.736897 | orchestrator | 2025-05-05 01:34:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:09.737038 | orchestrator | 2025-05-05 01:34:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:12.789321 | orchestrator | 2025-05-05 01:34:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:12.789519 | orchestrator | 2025-05-05 01:34:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:15.832893 | orchestrator | 2025-05-05 01:34:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:15.833034 | orchestrator | 2025-05-05 01:34:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:18.875701 | orchestrator | 2025-05-05 01:34:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:18.875852 | orchestrator | 2025-05-05 01:34:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:21.929206 | orchestrator | 2025-05-05 
01:34:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:21.929405 | orchestrator | 2025-05-05 01:34:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:24.978171 | orchestrator | 2025-05-05 01:34:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:24.978325 | orchestrator | 2025-05-05 01:34:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:28.020510 | orchestrator | 2025-05-05 01:34:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:28.020648 | orchestrator | 2025-05-05 01:34:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:31.071540 | orchestrator | 2025-05-05 01:34:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:31.071684 | orchestrator | 2025-05-05 01:34:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:34.118792 | orchestrator | 2025-05-05 01:34:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:34.118952 | orchestrator | 2025-05-05 01:34:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:37.169806 | orchestrator | 2025-05-05 01:34:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:37.169902 | orchestrator | 2025-05-05 01:34:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:40.221259 | orchestrator | 2025-05-05 01:34:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:40.221468 | orchestrator | 2025-05-05 01:34:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:43.269343 | orchestrator | 2025-05-05 01:34:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:43.269569 | orchestrator | 2025-05-05 01:34:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:46.317785 | orchestrator | 2025-05-05 01:34:43 | INFO 
 | Wait 1 second(s) until the next check 2025-05-05 01:34:46.317926 | orchestrator | 2025-05-05 01:34:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:49.361086 | orchestrator | 2025-05-05 01:34:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:49.361232 | orchestrator | 2025-05-05 01:34:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:52.403008 | orchestrator | 2025-05-05 01:34:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:52.403163 | orchestrator | 2025-05-05 01:34:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:55.447677 | orchestrator | 2025-05-05 01:34:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:55.447816 | orchestrator | 2025-05-05 01:34:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:34:58.498530 | orchestrator | 2025-05-05 01:34:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:34:58.498678 | orchestrator | 2025-05-05 01:34:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:35:01.549174 | orchestrator | 2025-05-05 01:34:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:35:01.549316 | orchestrator | 2025-05-05 01:35:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:35:04.594839 | orchestrator | 2025-05-05 01:35:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:35:04.594988 | orchestrator | 2025-05-05 01:35:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:35:07.647476 | orchestrator | 2025-05-05 01:35:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:35:07.647631 | orchestrator | 2025-05-05 01:35:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:35:10.698339 | orchestrator | 2025-05-05 01:35:07 | INFO  | Wait 1 
second(s) until the next check 2025-05-05 01:35:10.698518 | orchestrator | 2025-05-05 01:35:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:35:13.751465 | orchestrator | 2025-05-05 01:35:10 | INFO  | Wait 1 second(s) until the next check
[the two log lines above repeat every ~3 seconds from 01:35:13 to 01:43:00 while task f23b16f3-902d-408a-ad02-63f3cf4dba3e remains in state STARTED]
2025-05-05 01:43:03.192426 | orchestrator | 2025-05-05 01:43:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:03.193567 | orchestrator | 2025-05-05 01:43:03 | INFO  | Task 1d89404c-b52f-43e0-bc29-2eb578184cfa is in state STARTED 2025-05-05 01:43:06.245585 | orchestrator | 2025-05-05 01:43:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:06.245720 | orchestrator | 2025-05-05 01:43:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:06.246877 | orchestrator | 2025-05-05 01:43:06 | INFO  | Task 1d89404c-b52f-43e0-bc29-2eb578184cfa is in state STARTED 2025-05-05 01:43:09.299464 | orchestrator | 2025-05-05 01:43:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:09.299603 | orchestrator | 2025-05-05 01:43:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:09.300131 | orchestrator | 2025-05-05 01:43:09 | INFO  | Task 1d89404c-b52f-43e0-bc29-2eb578184cfa is in state STARTED 2025-05-05 01:43:09.300421 | orchestrator | 2025-05-05 01:43:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:12.351163 | orchestrator | 2025-05-05 01:43:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:12.352068 | orchestrator | 
2025-05-05 01:43:12 | INFO  | Task 1d89404c-b52f-43e0-bc29-2eb578184cfa is in state SUCCESS 2025-05-05 01:43:15.402003 | orchestrator | 2025-05-05 01:43:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:15.402199 | orchestrator | 2025-05-05 01:43:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:18.452135 | orchestrator | 2025-05-05 01:43:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:18.452290 | orchestrator | 2025-05-05 01:43:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:21.498431 | orchestrator | 2025-05-05 01:43:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:21.498571 | orchestrator | 2025-05-05 01:43:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:24.545347 | orchestrator | 2025-05-05 01:43:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:24.545493 | orchestrator | 2025-05-05 01:43:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:27.596451 | orchestrator | 2025-05-05 01:43:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:27.596616 | orchestrator | 2025-05-05 01:43:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:30.644926 | orchestrator | 2025-05-05 01:43:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:30.645169 | orchestrator | 2025-05-05 01:43:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:33.695917 | orchestrator | 2025-05-05 01:43:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:33.696077 | orchestrator | 2025-05-05 01:43:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:36.743211 | orchestrator | 2025-05-05 01:43:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:36.743359 | orchestrator | 2025-05-05 
01:43:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:39.785209 | orchestrator | 2025-05-05 01:43:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:39.785350 | orchestrator | 2025-05-05 01:43:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:42.835974 | orchestrator | 2025-05-05 01:43:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:42.836126 | orchestrator | 2025-05-05 01:43:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:45.880519 | orchestrator | 2025-05-05 01:43:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:45.880692 | orchestrator | 2025-05-05 01:43:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:48.931066 | orchestrator | 2025-05-05 01:43:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:48.931214 | orchestrator | 2025-05-05 01:43:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:51.978216 | orchestrator | 2025-05-05 01:43:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:51.978358 | orchestrator | 2025-05-05 01:43:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:55.029777 | orchestrator | 2025-05-05 01:43:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:55.029923 | orchestrator | 2025-05-05 01:43:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:43:55.030192 | orchestrator | 2025-05-05 01:43:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:43:58.074478 | orchestrator | 2025-05-05 01:43:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:01.115524 | orchestrator | 2025-05-05 01:43:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:01.115695 | orchestrator | 2025-05-05 01:44:01 | INFO 
 | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:04.159509 | orchestrator | 2025-05-05 01:44:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:04.159649 | orchestrator | 2025-05-05 01:44:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:07.201089 | orchestrator | 2025-05-05 01:44:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:07.201256 | orchestrator | 2025-05-05 01:44:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:10.245281 | orchestrator | 2025-05-05 01:44:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:10.245420 | orchestrator | 2025-05-05 01:44:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:13.294511 | orchestrator | 2025-05-05 01:44:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:13.294724 | orchestrator | 2025-05-05 01:44:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:16.344500 | orchestrator | 2025-05-05 01:44:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:16.344645 | orchestrator | 2025-05-05 01:44:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:19.393072 | orchestrator | 2025-05-05 01:44:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:19.393225 | orchestrator | 2025-05-05 01:44:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:22.441160 | orchestrator | 2025-05-05 01:44:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:22.441298 | orchestrator | 2025-05-05 01:44:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:25.489146 | orchestrator | 2025-05-05 01:44:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:25.489318 | orchestrator | 2025-05-05 01:44:25 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:28.536639 | orchestrator | 2025-05-05 01:44:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:28.536820 | orchestrator | 2025-05-05 01:44:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:31.586207 | orchestrator | 2025-05-05 01:44:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:31.586346 | orchestrator | 2025-05-05 01:44:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:34.635815 | orchestrator | 2025-05-05 01:44:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:34.635962 | orchestrator | 2025-05-05 01:44:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:37.681193 | orchestrator | 2025-05-05 01:44:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:37.681332 | orchestrator | 2025-05-05 01:44:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:40.721985 | orchestrator | 2025-05-05 01:44:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:40.722186 | orchestrator | 2025-05-05 01:44:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:43.765347 | orchestrator | 2025-05-05 01:44:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:43.765475 | orchestrator | 2025-05-05 01:44:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:46.811178 | orchestrator | 2025-05-05 01:44:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:46.811312 | orchestrator | 2025-05-05 01:44:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:49.857252 | orchestrator | 2025-05-05 01:44:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:49.857462 | orchestrator | 2025-05-05 01:44:49 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:52.904219 | orchestrator | 2025-05-05 01:44:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:52.904366 | orchestrator | 2025-05-05 01:44:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:55.947989 | orchestrator | 2025-05-05 01:44:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:55.948161 | orchestrator | 2025-05-05 01:44:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:44:58.997493 | orchestrator | 2025-05-05 01:44:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:44:58.997640 | orchestrator | 2025-05-05 01:44:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:02.045085 | orchestrator | 2025-05-05 01:44:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:02.045226 | orchestrator | 2025-05-05 01:45:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:05.088613 | orchestrator | 2025-05-05 01:45:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:05.088870 | orchestrator | 2025-05-05 01:45:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:08.141925 | orchestrator | 2025-05-05 01:45:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:08.142167 | orchestrator | 2025-05-05 01:45:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:11.182835 | orchestrator | 2025-05-05 01:45:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:11.182952 | orchestrator | 2025-05-05 01:45:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:14.232265 | orchestrator | 2025-05-05 01:45:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:14.232411 | orchestrator | 2025-05-05 01:45:14 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:17.277520 | orchestrator | 2025-05-05 01:45:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:17.277662 | orchestrator | 2025-05-05 01:45:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:20.329097 | orchestrator | 2025-05-05 01:45:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:20.329274 | orchestrator | 2025-05-05 01:45:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:23.377185 | orchestrator | 2025-05-05 01:45:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:23.377334 | orchestrator | 2025-05-05 01:45:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:26.426643 | orchestrator | 2025-05-05 01:45:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:26.426836 | orchestrator | 2025-05-05 01:45:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:29.469125 | orchestrator | 2025-05-05 01:45:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:29.469287 | orchestrator | 2025-05-05 01:45:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:32.507118 | orchestrator | 2025-05-05 01:45:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:32.507258 | orchestrator | 2025-05-05 01:45:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:35.561818 | orchestrator | 2025-05-05 01:45:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:35.561966 | orchestrator | 2025-05-05 01:45:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:38.609890 | orchestrator | 2025-05-05 01:45:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:38.610080 | orchestrator | 2025-05-05 01:45:38 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:41.649689 | orchestrator | 2025-05-05 01:45:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:41.649876 | orchestrator | 2025-05-05 01:45:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:44.699154 | orchestrator | 2025-05-05 01:45:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:44.699294 | orchestrator | 2025-05-05 01:45:44 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:47.755822 | orchestrator | 2025-05-05 01:45:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:47.755980 | orchestrator | 2025-05-05 01:45:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:50.811891 | orchestrator | 2025-05-05 01:45:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:50.812060 | orchestrator | 2025-05-05 01:45:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:53.860997 | orchestrator | 2025-05-05 01:45:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:53.861162 | orchestrator | 2025-05-05 01:45:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:56.915423 | orchestrator | 2025-05-05 01:45:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:56.915581 | orchestrator | 2025-05-05 01:45:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:45:59.964842 | orchestrator | 2025-05-05 01:45:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:45:59.964995 | orchestrator | 2025-05-05 01:45:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:03.008017 | orchestrator | 2025-05-05 01:45:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:03.008160 | orchestrator | 2025-05-05 01:46:03 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:06.066254 | orchestrator | 2025-05-05 01:46:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:06.066396 | orchestrator | 2025-05-05 01:46:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:09.116724 | orchestrator | 2025-05-05 01:46:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:09.116928 | orchestrator | 2025-05-05 01:46:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:12.162683 | orchestrator | 2025-05-05 01:46:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:12.162877 | orchestrator | 2025-05-05 01:46:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:15.213229 | orchestrator | 2025-05-05 01:46:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:15.213371 | orchestrator | 2025-05-05 01:46:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:18.264432 | orchestrator | 2025-05-05 01:46:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:18.264574 | orchestrator | 2025-05-05 01:46:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:21.313943 | orchestrator | 2025-05-05 01:46:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:21.314154 | orchestrator | 2025-05-05 01:46:21 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:24.364218 | orchestrator | 2025-05-05 01:46:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:24.364363 | orchestrator | 2025-05-05 01:46:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:27.414501 | orchestrator | 2025-05-05 01:46:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:27.414683 | orchestrator | 2025-05-05 01:46:27 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:30.467740 | orchestrator | 2025-05-05 01:46:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:30.467948 | orchestrator | 2025-05-05 01:46:30 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:33.512333 | orchestrator | 2025-05-05 01:46:30 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:33.512481 | orchestrator | 2025-05-05 01:46:33 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:36.563651 | orchestrator | 2025-05-05 01:46:33 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:36.563854 | orchestrator | 2025-05-05 01:46:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:39.618662 | orchestrator | 2025-05-05 01:46:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:39.618868 | orchestrator | 2025-05-05 01:46:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:42.669000 | orchestrator | 2025-05-05 01:46:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:42.669111 | orchestrator | 2025-05-05 01:46:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:45.717168 | orchestrator | 2025-05-05 01:46:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:45.717315 | orchestrator | 2025-05-05 01:46:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:48.767993 | orchestrator | 2025-05-05 01:46:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:48.768128 | orchestrator | 2025-05-05 01:46:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:51.818394 | orchestrator | 2025-05-05 01:46:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:51.818539 | orchestrator | 2025-05-05 01:46:51 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:54.864693 | orchestrator | 2025-05-05 01:46:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:54.864903 | orchestrator | 2025-05-05 01:46:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:46:57.921119 | orchestrator | 2025-05-05 01:46:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:46:57.921266 | orchestrator | 2025-05-05 01:46:57 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:00.979095 | orchestrator | 2025-05-05 01:46:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:00.979207 | orchestrator | 2025-05-05 01:47:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:04.026099 | orchestrator | 2025-05-05 01:47:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:04.026251 | orchestrator | 2025-05-05 01:47:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:07.071831 | orchestrator | 2025-05-05 01:47:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:07.071973 | orchestrator | 2025-05-05 01:47:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:10.112305 | orchestrator | 2025-05-05 01:47:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:10.112486 | orchestrator | 2025-05-05 01:47:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:13.156711 | orchestrator | 2025-05-05 01:47:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:13.156903 | orchestrator | 2025-05-05 01:47:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:16.207693 | orchestrator | 2025-05-05 01:47:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:16.207880 | orchestrator | 2025-05-05 01:47:16 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:19.255970 | orchestrator | 2025-05-05 01:47:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:19.256112 | orchestrator | 2025-05-05 01:47:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:22.310467 | orchestrator | 2025-05-05 01:47:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:22.310619 | orchestrator | 2025-05-05 01:47:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:25.358379 | orchestrator | 2025-05-05 01:47:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:25.358518 | orchestrator | 2025-05-05 01:47:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:28.405575 | orchestrator | 2025-05-05 01:47:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:28.405722 | orchestrator | 2025-05-05 01:47:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:31.452201 | orchestrator | 2025-05-05 01:47:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:31.452362 | orchestrator | 2025-05-05 01:47:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:34.495147 | orchestrator | 2025-05-05 01:47:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:34.495292 | orchestrator | 2025-05-05 01:47:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:37.540559 | orchestrator | 2025-05-05 01:47:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:37.540714 | orchestrator | 2025-05-05 01:47:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:40.588497 | orchestrator | 2025-05-05 01:47:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:40.588641 | orchestrator | 2025-05-05 01:47:40 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:43.637755 | orchestrator | 2025-05-05 01:47:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:43.637952 | orchestrator | 2025-05-05 01:47:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:46.679033 | orchestrator | 2025-05-05 01:47:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:46.679180 | orchestrator | 2025-05-05 01:47:46 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:49.731065 | orchestrator | 2025-05-05 01:47:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:49.731249 | orchestrator | 2025-05-05 01:47:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:52.786594 | orchestrator | 2025-05-05 01:47:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:52.786744 | orchestrator | 2025-05-05 01:47:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:55.834569 | orchestrator | 2025-05-05 01:47:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:55.834708 | orchestrator | 2025-05-05 01:47:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:47:58.883747 | orchestrator | 2025-05-05 01:47:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:47:58.883949 | orchestrator | 2025-05-05 01:47:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:01.928430 | orchestrator | 2025-05-05 01:47:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:01.928571 | orchestrator | 2025-05-05 01:48:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:04.974776 | orchestrator | 2025-05-05 01:48:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:04.975007 | orchestrator | 2025-05-05 01:48:04 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:08.023020 | orchestrator | 2025-05-05 01:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:08.023158 | orchestrator | 2025-05-05 01:48:08 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:11.072154 | orchestrator | 2025-05-05 01:48:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:11.072293 | orchestrator | 2025-05-05 01:48:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:14.122088 | orchestrator | 2025-05-05 01:48:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:14.122274 | orchestrator | 2025-05-05 01:48:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:17.172322 | orchestrator | 2025-05-05 01:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:17.172464 | orchestrator | 2025-05-05 01:48:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:20.220487 | orchestrator | 2025-05-05 01:48:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:20.220648 | orchestrator | 2025-05-05 01:48:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:23.262936 | orchestrator | 2025-05-05 01:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:23.263077 | orchestrator | 2025-05-05 01:48:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:26.307512 | orchestrator | 2025-05-05 01:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:26.307662 | orchestrator | 2025-05-05 01:48:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:48:29.359457 | orchestrator | 2025-05-05 01:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:48:29.359591 | orchestrator | 2025-05-05 01:48:29 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED
[repeated poll output elided: "Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED" / "Wait 1 second(s) until the next check" every ~3 seconds from 01:48:32 to 01:53:03]
2025-05-05 01:53:03.727878 | orchestrator | 2025-05-05 01:53:03 | INFO  | Task 2e082eda-db3a-409f-a357-d1a52fc529e4 is in state STARTED
2025-05-05 01:53:12.896947 | orchestrator | 2025-05-05 01:53:12 | INFO  | Task 2e082eda-db3a-409f-a357-d1a52fc529e4 is in state SUCCESS
[repeated poll output elided: "Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED" / "Wait 1 second(s) until the next check" every ~3 seconds from 01:53:15 to 01:56:55]
2025-05-05 01:56:55.420280 | orchestrator | 2025-05-05 01:56:55 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:56:58.474535 | orchestrator | 2025-05-05 01:56:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:56:58.474680 | orchestrator | 2025-05-05 01:56:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:01.526851 | orchestrator | 2025-05-05 01:56:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:01.526998 | orchestrator | 2025-05-05 01:57:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:04.574851 | orchestrator | 2025-05-05 01:57:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:04.574992 | orchestrator | 2025-05-05 01:57:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:07.629130 | orchestrator | 2025-05-05 01:57:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:07.629898 | orchestrator | 2025-05-05 01:57:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:10.673476 | orchestrator | 2025-05-05 01:57:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:10.673594 | orchestrator | 2025-05-05 01:57:10 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:13.720251 | orchestrator | 2025-05-05 01:57:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:13.720393 | orchestrator | 2025-05-05 01:57:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:16.770896 | orchestrator | 2025-05-05 01:57:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:16.771036 | orchestrator | 2025-05-05 01:57:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:19.818444 | orchestrator | 2025-05-05 01:57:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:19.818590 | orchestrator | 2025-05-05 01:57:19 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:22.868834 | orchestrator | 2025-05-05 01:57:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:22.868976 | orchestrator | 2025-05-05 01:57:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:25.916069 | orchestrator | 2025-05-05 01:57:22 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:25.916220 | orchestrator | 2025-05-05 01:57:25 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:28.959080 | orchestrator | 2025-05-05 01:57:25 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:28.959244 | orchestrator | 2025-05-05 01:57:28 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:32.009430 | orchestrator | 2025-05-05 01:57:28 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:32.009577 | orchestrator | 2025-05-05 01:57:32 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:35.057608 | orchestrator | 2025-05-05 01:57:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:35.057802 | orchestrator | 2025-05-05 01:57:35 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:38.108308 | orchestrator | 2025-05-05 01:57:35 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:38.108477 | orchestrator | 2025-05-05 01:57:38 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:41.155414 | orchestrator | 2025-05-05 01:57:38 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:41.155575 | orchestrator | 2025-05-05 01:57:41 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:44.203522 | orchestrator | 2025-05-05 01:57:41 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:44.203734 | orchestrator | 2025-05-05 01:57:44 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:47.244811 | orchestrator | 2025-05-05 01:57:44 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:47.244907 | orchestrator | 2025-05-05 01:57:47 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:50.292409 | orchestrator | 2025-05-05 01:57:47 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:50.292547 | orchestrator | 2025-05-05 01:57:50 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:53.344396 | orchestrator | 2025-05-05 01:57:50 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:53.344542 | orchestrator | 2025-05-05 01:57:53 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:56.395311 | orchestrator | 2025-05-05 01:57:53 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:56.395454 | orchestrator | 2025-05-05 01:57:56 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:57:59.445117 | orchestrator | 2025-05-05 01:57:56 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:57:59.445257 | orchestrator | 2025-05-05 01:57:59 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:02.482632 | orchestrator | 2025-05-05 01:57:59 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:02.482856 | orchestrator | 2025-05-05 01:58:02 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:05.524865 | orchestrator | 2025-05-05 01:58:02 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:05.525774 | orchestrator | 2025-05-05 01:58:05 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:08.579078 | orchestrator | 2025-05-05 01:58:05 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:08.579247 | orchestrator | 2025-05-05 01:58:08 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:11.628396 | orchestrator | 2025-05-05 01:58:08 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:11.629258 | orchestrator | 2025-05-05 01:58:11 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:14.670780 | orchestrator | 2025-05-05 01:58:11 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:14.670894 | orchestrator | 2025-05-05 01:58:14 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:17.719550 | orchestrator | 2025-05-05 01:58:14 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:17.719741 | orchestrator | 2025-05-05 01:58:17 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:20.767974 | orchestrator | 2025-05-05 01:58:17 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:20.768112 | orchestrator | 2025-05-05 01:58:20 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:23.819938 | orchestrator | 2025-05-05 01:58:20 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:23.820102 | orchestrator | 2025-05-05 01:58:23 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:26.870098 | orchestrator | 2025-05-05 01:58:23 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:26.870241 | orchestrator | 2025-05-05 01:58:26 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:29.916933 | orchestrator | 2025-05-05 01:58:26 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:29.917082 | orchestrator | 2025-05-05 01:58:29 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:32.962906 | orchestrator | 2025-05-05 01:58:29 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:32.963045 | orchestrator | 2025-05-05 01:58:32 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:36.014273 | orchestrator | 2025-05-05 01:58:32 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:36.014435 | orchestrator | 2025-05-05 01:58:36 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:39.063582 | orchestrator | 2025-05-05 01:58:36 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:39.063778 | orchestrator | 2025-05-05 01:58:39 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:42.106081 | orchestrator | 2025-05-05 01:58:39 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:42.106228 | orchestrator | 2025-05-05 01:58:42 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:45.153032 | orchestrator | 2025-05-05 01:58:42 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:45.153209 | orchestrator | 2025-05-05 01:58:45 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:48.206511 | orchestrator | 2025-05-05 01:58:45 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:48.206717 | orchestrator | 2025-05-05 01:58:48 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:51.261023 | orchestrator | 2025-05-05 01:58:48 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:51.261171 | orchestrator | 2025-05-05 01:58:51 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:54.326336 | orchestrator | 2025-05-05 01:58:51 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:54.326497 | orchestrator | 2025-05-05 01:58:54 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:58:57.383218 | orchestrator | 2025-05-05 01:58:54 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:58:57.383359 | orchestrator | 2025-05-05 01:58:57 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:00.424943 | orchestrator | 2025-05-05 01:58:57 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:00.425089 | orchestrator | 2025-05-05 01:59:00 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:03.483811 | orchestrator | 2025-05-05 01:59:00 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:03.483953 | orchestrator | 2025-05-05 01:59:03 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:06.539952 | orchestrator | 2025-05-05 01:59:03 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:06.540100 | orchestrator | 2025-05-05 01:59:06 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:09.599446 | orchestrator | 2025-05-05 01:59:06 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:09.599591 | orchestrator | 2025-05-05 01:59:09 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:12.651275 | orchestrator | 2025-05-05 01:59:09 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:12.651441 | orchestrator | 2025-05-05 01:59:12 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:15.707187 | orchestrator | 2025-05-05 01:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:15.707330 | orchestrator | 2025-05-05 01:59:15 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:18.763785 | orchestrator | 2025-05-05 01:59:15 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:18.763938 | orchestrator | 2025-05-05 01:59:18 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:21.817376 | orchestrator | 2025-05-05 01:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:21.817521 | orchestrator | 2025-05-05 01:59:21 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:24.881882 | orchestrator | 2025-05-05 01:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:24.882086 | orchestrator | 2025-05-05 01:59:24 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:27.947300 | orchestrator | 2025-05-05 01:59:24 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:27.947439 | orchestrator | 2025-05-05 01:59:27 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:31.018438 | orchestrator | 2025-05-05 01:59:27 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:31.018595 | orchestrator | 2025-05-05 01:59:31 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:34.069054 | orchestrator | 2025-05-05 01:59:31 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:34.069195 | orchestrator | 2025-05-05 01:59:34 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:37.130532 | orchestrator | 2025-05-05 01:59:34 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:37.130759 | orchestrator | 2025-05-05 01:59:37 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:40.199609 | orchestrator | 2025-05-05 01:59:37 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:40.199775 | orchestrator | 2025-05-05 01:59:40 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:43.252559 | orchestrator | 2025-05-05 01:59:40 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:43.252765 | orchestrator | 2025-05-05 01:59:43 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:46.309244 | orchestrator | 2025-05-05 01:59:43 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:46.309394 | orchestrator | 2025-05-05 01:59:46 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:49.354311 | orchestrator | 2025-05-05 01:59:46 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:49.354440 | orchestrator | 2025-05-05 01:59:49 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:52.406243 | orchestrator | 2025-05-05 01:59:49 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:52.406398 | orchestrator | 2025-05-05 01:59:52 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:55.456652 | orchestrator | 2025-05-05 01:59:52 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:55.456805 | orchestrator | 2025-05-05 01:59:55 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 01:59:58.499202 | orchestrator | 2025-05-05 01:59:55 | INFO  | Wait 1 second(s) until the next check 2025-05-05 01:59:58.499344 | orchestrator | 2025-05-05 01:59:58 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:01.564055 | orchestrator | 2025-05-05 01:59:58 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:01.564207 | orchestrator | 2025-05-05 02:00:01 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:04.614172 | orchestrator | 2025-05-05 02:00:01 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:04.614304 | orchestrator | 2025-05-05 02:00:04 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:07.667508 | orchestrator | 2025-05-05 02:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:07.667691 | orchestrator | 2025-05-05 02:00:07 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:10.725922 | orchestrator | 2025-05-05 02:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:10.726142 | orchestrator | 2025-05-05 02:00:10 | INFO  | Task 
f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:13.773304 | orchestrator | 2025-05-05 02:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:13.773448 | orchestrator | 2025-05-05 02:00:13 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:16.826210 | orchestrator | 2025-05-05 02:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:16.826350 | orchestrator | 2025-05-05 02:00:16 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:19.872271 | orchestrator | 2025-05-05 02:00:16 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:19.872408 | orchestrator | 2025-05-05 02:00:19 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:22.926098 | orchestrator | 2025-05-05 02:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-05 02:00:22.926241 | orchestrator | 2025-05-05 02:00:22 | INFO  | Task f23b16f3-902d-408a-ad02-63f3cf4dba3e is in state STARTED 2025-05-05 02:00:24.816188 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-05 02:00:24.821536 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-05 02:00:25.535492 | 2025-05-05 02:00:25.535673 | PLAY [Post output play] 2025-05-05 02:00:25.566581 | 2025-05-05 02:00:25.566725 | LOOP [stage-output : Register sources] 2025-05-05 02:00:25.644347 | 2025-05-05 02:00:25.644671 | TASK [stage-output : Check sudo] 2025-05-05 02:00:26.363374 | orchestrator | sudo: a password is required 2025-05-05 02:00:26.687178 | orchestrator | ok: Runtime: 0:00:00.015231 2025-05-05 02:00:26.706975 | 2025-05-05 02:00:26.707154 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-05 02:00:26.760339 | 2025-05-05 02:00:26.760704 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-05 02:00:26.853550 | orchestrator | ok 
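The long run of "Wait 1 second(s)" entries above comes from a client polling a task until it leaves the STARTED state; here the task never reached a terminal state, so the Zuul job's own wall-clock limit fired first (RESULT_TIMED_OUT). A minimal sketch of such a poll loop with an explicit deadline (the `get_state` callable, state names, and timings are assumptions for illustration, not the actual OSISM client API):

```python
import time

def wait_for_task(get_state, task_id, poll_interval=1.0, timeout=300.0):
    """Poll a task's state until it reaches a terminal state or the deadline expires.

    get_state: callable(task_id) -> state string (assumed interface).
    Returns the final state, or raises TimeoutError if the task is still
    running when the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_state(task_id)
        if state not in ("PENDING", "STARTED"):
            return state  # terminal, e.g. SUCCESS or FAILURE
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Task {task_id} still {state} after {timeout}s")
        time.sleep(poll_interval)
```

An internal deadline like this lets the deploy playbook fail with a clear message instead of looping until the surrounding CI job is killed by its timeout.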
2025-05-05 02:00:26.864000 | LOOP [stage-output : Ensure target folders exist]
2025-05-05 02:00:27.332521 | orchestrator | ok: "docs"
2025-05-05 02:00:27.588474 | orchestrator | ok: "artifacts"
2025-05-05 02:00:27.828910 | orchestrator | ok: "logs"
2025-05-05 02:00:27.853446 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-05 02:00:27.895837 | TASK [stage-output : Make all log files readable]
2025-05-05 02:00:28.171589 | orchestrator | ok
2025-05-05 02:00:28.182323 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-05 02:00:28.228366 | orchestrator | skipping: Conditional result was False
2025-05-05 02:00:28.244130 | TASK [stage-output : Discover log files for compression]
2025-05-05 02:00:28.269534 | orchestrator | skipping: Conditional result was False
2025-05-05 02:00:28.285499 | LOOP [stage-output : Archive everything from logs]
2025-05-05 02:00:28.363735 | PLAY [Post cleanup play]
2025-05-05 02:00:28.387807 | TASK [Set cloud fact (Zuul deployment)]
2025-05-05 02:00:28.458526 | orchestrator | ok
2025-05-05 02:00:28.471345 | TASK [Set cloud fact (local deployment)]
2025-05-05 02:00:28.506333 | orchestrator | skipping: Conditional result was False
2025-05-05 02:00:28.523678 | TASK [Clean the cloud environment]
2025-05-05 02:00:29.147727 | orchestrator | 2025-05-05 02:00:29 - clean up servers
2025-05-05 02:00:30.119319 | orchestrator | 2025-05-05 02:00:30 - testbed-manager
2025-05-05 02:00:31.194341 | orchestrator | 2025-05-05 02:00:31 - testbed-node-5
2025-05-05 02:00:31.306991 | orchestrator | 2025-05-05 02:00:31 - testbed-node-4
2025-05-05 02:00:31.417890 | orchestrator | 2025-05-05 02:00:31 - testbed-node-0
2025-05-05 02:00:31.529314 | orchestrator | 2025-05-05 02:00:31 - testbed-node-1
2025-05-05 02:00:31.642388 | orchestrator | 2025-05-05 02:00:31 - testbed-node-3
2025-05-05 02:00:31.754006 | orchestrator | 2025-05-05 02:00:31 - testbed-node-2
2025-05-05 02:00:31.848302 | orchestrator | 2025-05-05 02:00:31 - clean up keypairs
2025-05-05 02:00:31.864992 | orchestrator | 2025-05-05 02:00:31 - testbed
2025-05-05 02:00:31.893079 | orchestrator | 2025-05-05 02:00:31 - wait for servers to be gone
2025-05-05 02:00:38.783880 | orchestrator | 2025-05-05 02:00:38 - clean up ports
2025-05-05 02:00:39.004131 | orchestrator | 2025-05-05 02:00:39 - 32d7ff85-d588-4a83-86a1-d6228b9bd83d
2025-05-05 02:00:39.411068 | orchestrator | 2025-05-05 02:00:39 - 40c386c7-a1a2-4bc1-a765-1cd0b02e3a79
2025-05-05 02:00:39.639843 | orchestrator | 2025-05-05 02:00:39 - 73a0f3a4-daf7-4c39-a9e4-19080753b1cb
2025-05-05 02:00:39.851948 | orchestrator | 2025-05-05 02:00:39 - 99b7a487-129a-4313-ba2d-3879ca3bb64b
2025-05-05 02:00:40.038183 | orchestrator | 2025-05-05 02:00:40 - a3b7930f-3241-4251-87d9-1f17959145bb
2025-05-05 02:00:40.225375 | orchestrator | 2025-05-05 02:00:40 - e53178c4-c1b5-4276-bf8e-ae958fbd9218
2025-05-05 02:00:40.443210 | orchestrator | 2025-05-05 02:00:40 - e75006bb-396b-4545-8d65-990c2c517039
2025-05-05 02:00:40.641181 | orchestrator | 2025-05-05 02:00:40 - clean up volumes
2025-05-05 02:00:40.804577 | orchestrator | 2025-05-05 02:00:40 - testbed-volume-1-node-base
2025-05-05 02:00:40.841560 | orchestrator | 2025-05-05 02:00:40 - testbed-volume-5-node-base
2025-05-05 02:00:40.884729 | orchestrator | 2025-05-05 02:00:40 - testbed-volume-0-node-base
2025-05-05 02:00:40.931748 | orchestrator | 2025-05-05 02:00:40 - testbed-volume-4-node-base
2025-05-05 02:00:40.970865 | orchestrator | 2025-05-05 02:00:40 - testbed-volume-manager-base
2025-05-05 02:00:41.009970 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-3-node-base
2025-05-05 02:00:41.081999 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-2-node-base
2025-05-05 02:00:41.121061 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-4-node-4
2025-05-05 02:00:41.161048 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-16-node-4
2025-05-05 02:00:41.199524 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-17-node-5
2025-05-05 02:00:41.239432 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-11-node-5
2025-05-05 02:00:41.282144 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-15-node-3
2025-05-05 02:00:41.325211 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-3-node-3
2025-05-05 02:00:41.365862 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-8-node-2
2025-05-05 02:00:41.406855 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-1-node-1
2025-05-05 02:00:41.451264 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-14-node-2
2025-05-05 02:00:41.490405 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-9-node-3
2025-05-05 02:00:41.532440 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-7-node-1
2025-05-05 02:00:41.572190 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-5-node-5
2025-05-05 02:00:41.615877 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-6-node-0
2025-05-05 02:00:41.659862 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-12-node-0
2025-05-05 02:00:41.698761 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-2-node-2
2025-05-05 02:00:41.739804 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-10-node-4
2025-05-05 02:00:41.779658 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-13-node-1
2025-05-05 02:00:41.820289 | orchestrator | 2025-05-05 02:00:41 - testbed-volume-0-node-0
2025-05-05 02:00:41.865063 | orchestrator | 2025-05-05 02:00:41 - disconnect routers
2025-05-05 02:00:41.922280 | orchestrator | 2025-05-05 02:00:41 - testbed
2025-05-05 02:00:42.689881 | orchestrator | 2025-05-05 02:00:42 - clean up subnets
2025-05-05 02:00:42.722516 | orchestrator | 2025-05-05 02:00:42 - subnet-testbed-management
2025-05-05 02:00:42.867752 | orchestrator | 2025-05-05 02:00:42 - clean up networks
2025-05-05 02:00:43.036161 | orchestrator | 2025-05-05 02:00:43 - net-testbed-management
2025-05-05 02:00:43.285052 | orchestrator | 2025-05-05 02:00:43 - clean up security groups
2025-05-05 02:00:43.333302 | orchestrator | 2025-05-05 02:00:43 - testbed-management
2025-05-05 02:00:43.418736 | orchestrator | 2025-05-05 02:00:43 - testbed-node
2025-05-05 02:00:43.499833 | orchestrator | 2025-05-05 02:00:43 - clean up floating ips
2025-05-05 02:00:43.533561 | orchestrator | 2025-05-05 02:00:43 - 81.163.192.165
2025-05-05 02:00:43.956337 | orchestrator | 2025-05-05 02:00:43 - clean up routers
2025-05-05 02:00:44.060649 | orchestrator | 2025-05-05 02:00:44 - testbed
2025-05-05 02:00:45.584512 | orchestrator | changed
2025-05-05 02:00:45.625610 | PLAY RECAP
2025-05-05 02:00:45.625666 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-05 02:00:45.738522 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-05 02:00:45.746729 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-05 02:00:46.452916 | PLAY [Base post-fetch]
2025-05-05 02:00:46.483049 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-05 02:00:46.550547 | orchestrator | skipping: Conditional result was False
2025-05-05 02:00:46.566749 | TASK [fetch-output : Set log path for single node]
2025-05-05 02:00:46.640026 | orchestrator | ok
2025-05-05 02:00:46.649102 | LOOP [fetch-output : Ensure local output dirs]
2025-05-05 02:00:47.163393 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/work/logs"
2025-05-05 02:00:47.452662 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/work/artifacts"
2025-05-05 02:00:47.722485 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a6ffc6a5efc64cf28e477909b96a1c4a/work/docs"
2025-05-05 02:00:47.749229 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-05 02:00:48.573444 | orchestrator | changed: .d..t...... ./
2025-05-05 02:00:48.573791 | orchestrator | changed: All items complete
2025-05-05 02:00:49.175202 | orchestrator | changed: .d..t...... ./
2025-05-05 02:00:49.772168 | orchestrator | changed: .d..t...... ./
2025-05-05 02:00:49.792057 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-05 02:00:49.836381 | orchestrator | skipping: Conditional result was False
2025-05-05 02:00:49.848682 | orchestrator | skipping: Conditional result was False
2025-05-05 02:00:49.905596 | PLAY RECAP
2025-05-05 02:00:49.905654 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-05 02:00:50.029853 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-05 02:00:50.039128 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-05 02:00:50.749962 | PLAY [Base post]
2025-05-05 02:00:50.779420 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-05 02:00:51.786038 | orchestrator | changed
2025-05-05 02:00:51.821771 | PLAY RECAP
2025-05-05 02:00:51.821835 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-05 02:00:51.941042 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
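The "Clean the cloud environment" task above tears the testbed down in dependency order: servers before ports and volumes, router interfaces disconnected before subnets, and routers last. A small sketch of that ordering as data plus a driver (the step names come from the log; the per-step delete callables are hypothetical stand-ins, not the actual cleanup script):

```python
# Dependency-ordered teardown mirroring the log's cleanup sequence.
CLEANUP_ORDER = [
    "servers",            # instances first, so ports and volumes become detachable
    "keypairs",
    "ports",              # only deletable once the servers are gone
    "volumes",
    "router interfaces",  # disconnect before subnets can be removed
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",            # last, once nothing references them
]

def clean_environment(deleters):
    """Run per-resource delete callables (hypothetical) in dependency order.

    deleters: dict mapping step name -> zero-arg callable; missing steps are no-ops.
    Returns the list of steps executed, in order.
    """
    executed = []
    for step in CLEANUP_ORDER:
        deleters.get(step, lambda: None)()
        executed.append(step)
    return executed
```

Running the steps strictly in this order is what lets the cleanup finish without "resource in use" conflicts from Nova or Neutron.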
2025-05-05 02:00:51.944315 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-05 02:00:52.693695 | PLAY [Base post-logs]
2025-05-05 02:00:52.710201 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-05 02:00:53.183893 | localhost | changed
2025-05-05 02:00:53.191457 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-05 02:00:53.235375 | localhost | ok
2025-05-05 02:00:53.246022 | TASK [Set zuul-log-path fact]
2025-05-05 02:00:53.268463 | localhost | ok
2025-05-05 02:00:53.288031 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-05 02:00:53.330169 | localhost | ok
2025-05-05 02:00:53.341415 | TASK [upload-logs : Create log directories]
2025-05-05 02:00:53.837588 | localhost | changed
2025-05-05 02:00:53.846249 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-05 02:00:54.301214 | localhost -> localhost | ok: Runtime: 0:00:00.006550
2025-05-05 02:00:54.313002 | TASK [upload-logs : Upload logs to log server]
2025-05-05 02:00:54.832333 | localhost | Output suppressed because no_log was given
2025-05-05 02:00:54.837761 | LOOP [upload-logs : Compress console log and json output]
2025-05-05 02:00:54.905215 | localhost | skipping: Conditional result was False
2025-05-05 02:00:54.921009 | localhost | skipping: Conditional result was False
2025-05-05 02:00:54.933473 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-05 02:00:54.998814 | localhost | skipping: Conditional result was False
2025-05-05 02:00:55.012583 | localhost | skipping: Conditional result was False
2025-05-05 02:00:55.023027 | LOOP [upload-logs : Upload console log and json output]